diff --git "a/data.json" "b/data.json" --- "a/data.json" +++ "b/data.json" @@ -5,9 +5,9 @@ "authors": [ "Jie Xiao", "Ruili Feng", + "Han Zhang", "Zhiheng Liu", "Zhantao Yang", - "Han Zhang", "Yurui Zhu", "Xueyang Fu", "Kai Zhu", @@ -95,7 +95,7 @@ "authors": [ "Yuhang Zang", "Hanlin Goh", - "Joshua Susskind", + "Joshua M. Susskind", "Chen Huang" ], "abstract": "Existing vision-language models exhibit strong generalization on a variety of visual domains and tasks. However, such models mainly perform zero-shot recognition in a closed-set manner, and thus struggle to handle open-domain visual concepts by design. There are recent finetuning methods, such as prompt learning, that not only study the discrimination between in-distribution (ID) and out-of-distribution (OOD) samples, but also show some improvements in both ID and OOD accuracies. In this paper, we first demonstrate that vision-language models, after long enough finetuning but without proper regularization, tend to overfit the known classes in the given dataset, with degraded performance on unknown classes. Then we propose a novel approach OGEN to address this pitfall, with the main focus on improving the OOD GENeralization of finetuned models. Specifically, a class-conditional feature generator is introduced to synthesize OOD features using just the class name of any unknown class. Such synthesized features will provide useful knowledge about unknowns and help regularize the decision boundary between ID and OOD data when optimized jointly. Equally important is our adaptive self-distillation mechanism to regularize our feature generation model during joint optimization, i.e., adaptively transferring knowledge between model states to further prevent overfitting. Experiments validate that our method yields convincing gains in OOD generalization performance in different settings.", @@ -111,10 +111,10 @@ "id": 18844, "title": "Unraveling the Key Components of OOD Generalization via Diversification", "authors": [ - "Harold Benoit", + "Harold Luc Benoit", "Liangze Jiang", "Andrei Atanov", - "Oguzhan Kar", + "Oguzhan Fatih Kar", "Mattia Rigotti", "Amir Zamir" ], @@ -194,7 +194,7 @@ "Aaron Spieler", "Nasim Rahaman", "Georg Martius", - "Bernhard Schoelkopf", + "Bernhard Sch\u00f6lkopf", "Anna Levina" ], "abstract": "Biological cortical neurons are remarkably sophisticated computational devices,temporally integrating their vast synaptic input over an intricate dendritic tree,subject to complex, nonlinearly interacting internal biological processes. A recentstudy proposed to characterize this complexity by fitting accurate surrogate modelsto replicate the input-output relationship of a detailed biophysical cortical pyramidalneuron model and discovered it needed temporal convolutional networks (TCN)with millions of parameters. Requiring these many parameters, however, couldbe the result of a misalignment between the inductive biases of the TCN andcortical neuron\u2019s computations. In light of this, and with the aim to explorethe computational implications of leaky memory units and nonlinear dendriticprocessing, we introduce the Expressive Leaky Memory (ELM) neuron model, abiologically inspired phenomenological model of a cortical neuron. Remarkably, byexploiting a few such slowly decaying memory-like hidden states and two-layerednonlinear integration of synaptic input, our ELM neuron can accurately matchthe aforementioned input-output relationship with under ten-thousand trainableparameters. 
To further assess the computational ramifications of our neuron design, we evaluate on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets, as well as a novel neuromorphic dataset based on the Spiking Heidelberg Digits dataset (SHD-Adding). Leveraging a larger number of memory units with sufficiently long timescales, and correspondingly sophisticated synaptic integration, the ELM neuron proves to be competitive on both datasets, reliably outperforming the classic Transformer or Chrono-LSTM architectures on the latter, even solving the Pathfinder-X task with over 70\\% accuracy (16k context length). These findings indicate the importance of inductive biases for efficient surrogate neuron models and the potential for biologically motivated models to enhance performance in challenging machine learning tasks.", @@ -213,7 +213,7 @@ "Sina Khajehabdollahi", "Roxana Zeraati", "Emmanouil Giannakakis", - "Tim Sch\u00e4fer", + "Tim Jakob Sch\u00e4fer", "Georg Martius", "Anna Levina" ], @@ -238,7 +238,7 @@ "Sri Vardhamanan A", "Saiful Haq", "Ashutosh Sharma", - "Thomas Joshi", + "Thomas T. Joshi", "Hanna Moazam", "Heather Miller", "Matei Zaharia", @@ -341,7 +341,7 @@ "Neelabh Madan", "Deepesh Hada", "Vidit Jain", - "Sonu Mehta", + "SONU MEHTA", "Yashoteja Prabhu", "Manish Gupta", "Ramachandran Ramjee", @@ -420,8 +420,8 @@ "authors": [ "Pablo Pernias", "Dominic Rampas", - "Mats L. Richter", - "Chris J Pal", + "Mats Leon Richter", + "Christopher Pal", "Marc Aubreville" ], "abstract": "We introduce W\u00fcrstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consist of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1's 200,000 GPU hours. Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allow us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favourably in terms of image quality. We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility.", @@ -495,7 +495,7 @@ }, { "id": 18380, - "title": "Kalman Filter Online Learning from non-Stationary Data", + "title": "Kalman Filter for Online Classification of Non-Stationary Data", "authors": [ "Michalis Titsias", "Alexandre Galashov", @@ -518,7 +518,7 @@ "title": "CODE REPRESENTATION LEARNING AT SCALE", "authors": [ "Dejiao Zhang", - "Wasi Ahmad", + "Wasi Uddin Ahmad", "Ming Tan", "Hantian Ding", "Ramesh Nallapati", @@ -582,7 +582,7 @@ "Pascal Chang", "Jingwei Tang", "Markus Gross", - "Vinicius Da Costa De Azevedo" + "Vinicius C. 
Azevedo" ], "abstract": "Video editing and generation methods often rely on pre-trained image-based diffusion models. During the diffusion process, however, the reliance on rudimentary noise sampling techniques that do not preserve correlations present in subsequent frames of a video is detrimental to the quality of the results. This either produces high-frequency flickering, or texture-sticking artifacts that are not amenable to post-processing. With this in mind, we propose a novel method for preserving temporal correlations in a sequence of noise samples. This approach is materialized by a novel noise representation, dubbed $\\int$-noise (integral noise), that reinterprets individual noise samples as a continuously integrated noise field: pixel values do not represent discrete values, but are rather the integral of an underlying infinite-resolution noise over the pixel area. Additionally, we propose a carefully tailored transport method that uses $\\int$-noise to accurately advect noise samples over a sequence of frames, maximizing the correlation between different frames while also preserving the noise properties. Our results demonstrate that the proposed $\\int$-noise can be used for a variety of tasks, such as video restoration, surrogate rendering, and conditional video generation.", "type": "Oral", @@ -631,7 +631,7 @@ }, { "id": 18303, - "title": "Output-Domain Focused Inductive Bias on Latent Feature Clusters in Visual Classification", + "title": "Label-Focused Inductive Bias over Latent Object Features in Visual Classification", "authors": [ "Ilmin Kang", "HyounYoung Bae", @@ -788,11 +788,11 @@ }, { "id": 17395, - "title": "Massively Scalable Inverse Reinforcement Learning for Route Optimization", + "title": "Massively Scalable Inverse Reinforcement Learning in Google Maps", "authors": [ "Matt Barnes", "Matthew Abueg", - "Oliver Lange", + "Oliver F. Lange", "Matt Deeds", "Jason Trader", "Denali Molitor", @@ -854,10 +854,10 @@ }, { "id": 17377, - "title": "The Dark Side of the Hyperbolic Moon", + "title": "Shadow Cones: A Generalized Framework for Partial Order Embeddings", "authors": [ "Tao Yu", - "Toni Liu", + "Toni J.B. Liu", "Albert Tseng", "Christopher De Sa" ], @@ -879,7 +879,7 @@ "Nouha Dziri", "Faeze Brahman", "Linjie Li", - "Jena Hwang", + "Jena D. Hwang", "Liwei Jiang", "Jillian Fisher", "Abhilasha Ravichander", @@ -923,13 +923,13 @@ "title": "Are Bert Family Good Instruction Followers? A Study on Their Potential And Limitations", "authors": [ "yisheng xiao", - "Zechen Sun", "Juntao Li", - "Min Zhang", + "Zechen Sun", "Zechang Li", "Qingrong Xia", "Xinyu Duan", - "Zhefeng Wang" + "Zhefeng Wang", + "Min Zhang" ], "abstract": "Language modeling at scale has proven very effective and brought unprecedented success to natural language models. Many typical representatives, especially decoder-only models, e.g., BLOOM and LLaMA, and encoder-decoder models, e.g., Flan-T5 and AlexaTM, have exhibited incredible instruction-following capabilities while keeping strong task completion ability. These large language models can achieve superior performance in various tasks and even yield emergent capabilities, e.g., reasoning and universal generalization. Though the above two paradigms are mainstream and well explored, the potential of the BERT family, which are encoder-only based models and have ever been one of the most representative pre-trained models, also deserves attention, at least should be discussed. 
In this work, we adopt XML-R to explore the effectiveness of the BERT family for instruction following and zero-shot learning. We first design a simple yet effective strategy to utilize the encoder-only models for generation tasks and then conduct multi-task instruction tuning. Experimental results demonstrate that our fine-tuned model, Instruct-XMLR, outperforms Bloomz on all evaluation tasks and achieves comparable performance with mT0 on most tasks. Surprisingly, Instruct-XMLR also possesses strong task and language generalization abilities, indicating that Instruct-XMLR can also serve as a good instruction follower and zero-shot learner. Besides, Instruct-XMLR can accelerate decoding due to its non-autoregressive generation manner, achieving around 3 times speedup compared with current autoregressive large language models. Although we also witnessed several limitations through our experiments, such as the performance decline in long-generation tasks and the shortcoming of length prediction, Instruct-XMLR can still become a good member of the family of current large language models.", "type": "Poster", @@ -1037,8 +1037,8 @@ "Peter Kairouz", "Sewoong Oh", "Alina Oprea", - "H. Brendan McMahan", - "Vinith Suriyakumar" + "Hugh Brendan McMahan", + "Vinith Menon Suriyakumar" ], "abstract": "Privacy estimation techniques for differentially private (DP) algorithms are useful for comparing against analytical bounds, or to empirically measure privacy loss in settings where known analytical bounds are not tight. However, existing privacy auditing techniques usually make strong assumptions on the adversary (e.g., knowledge of intermediate model iterates or the training data distribution), are tailored to specific tasks, model architectures, or DP algorithm, and/or require retraining the model many times (typically on the order of thousands). These shortcomings make deploying such techniques at scale difficult in practice, especially in federated settings where model training can take days or weeks. In this work, we present a novel \u201cone-shot\u201d approach that can systematically address these challenges, allowing efficient auditing or estimation of the privacy loss of a model during the same, single training run used to fit model parameters, and without requiring any a priori knowledge about the model architecture, task, or DP algorithm. 
We show that our method provides provably correct estimates for the privacy loss under the Gaussian mechanism, and we demonstrate its performance on a well-established FL benchmark dataset under several adversarial threat models.", "type": "Oral", @@ -1076,7 +1076,7 @@ "id": 19441, "title": "Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference for Recommendation", "authors": [ - "Chunyuan Zheng", + "Haoxuan Li", "Chunyuan Zheng", "Sihao Ding", "Peng Wu", @@ -1097,7 +1097,7 @@ "id": 19055, "title": "Debiased Collaborative Filtering with Kernel-based Causal Balancing", "authors": [ - "Chunyuan Zheng", + "Haoxuan Li", "Yanghao Xiao", "Chunyuan Zheng", "Peng Wu", @@ -1119,7 +1119,7 @@ "title": "MetaCoCo: A New Few-Shot Classification Benchmark with Spurious Correlation", "authors": [ "Min Zhang", - "Chunyuan Zheng", + "Haoxuan Li", "Fei Wu", "Kun Kuang" ], @@ -1137,7 +1137,7 @@ "title": "Jointly Training Large Autoregressive Multimodal Models", "authors": [ "Emanuele Aiello", - "Lili Yu", + "LILI YU", "Yixin Nie", "Armen Aghajanyan", "Barlas Oguz" @@ -1158,8 +1158,8 @@ "Thomas Soares Mullen", "Marine Schimel", "Guillaume Hennequin", - "Christian Machens", - "Michael B. Orger", + "Christian K. Machens", + "Michael Orger", "Adrien Jouary" ], "abstract": "A central objective in neuroscience is to understand how the brain orchestrates movement. Recent advances in automated tracking technologies have made it possible to document behavior with unprecedented temporal resolution and scale, generating rich datasets which can be exploited to gain insights into the neural control of movement. One common approach is to identify stereotypical motor primitives using cluster analysis. However, this categorical description can limit our ability to model the effect of more continuous control schemes. Here we take a control theoretic approach to behavioral modeling and argue that movements can be understood as the output of a controlled dynamical system. Previously, models of movement dynamics, trained solely on behavioral data, have been effective in reproducing observed features of neural activity. These models addressed specific scenarios where animals were trained to execute particular movements upon receiving a prompt. In this study, we extend this approach to analyze the full natural locomotor repertoire of an animal: the zebrafish larva. Our findings demonstrate that this repertoire can be effectively generated through a sparse control signal driving a latent Recurrent Neural Network (RNN). Our model's learned latent space preserves key kinematic features and disentangles different categories of movements. To further interpret the latent dynamics, we used balanced model reduction to yield a simplified model. Collectively, our methods serve as a case study for interpretable system identification, and offer a novel framework for understanding neural activity in relation to movement.", @@ -1175,7 +1175,7 @@ "id": 18059, "title": "CausalTime: Realistically Generated Time-series for Benchmarking of Causal Discovery", "authors": [ - "YUXIAO CHENG", + "Yuxiao Cheng", "Ziqian Wang", "Tingxiong Xiao", "Qin Zhong", @@ -1195,9 +1195,9 @@ "id": 18615, "title": "A Study of Bayesian Neural Network Surrogates for Bayesian Optimization", "authors": [ - "Yucen Li", + "Yucen Lily Li", "Tim G. J. Rudner", - "Andrew Wilson" + "Andrew Gordon Wilson" ], "abstract": "Bayesian optimization is a highly efficient approach to optimizing objective functions which are expensive to query. 
These objectives are typically represented by Gaussian process (GP) surrogate models which are easy to optimize and support exact inference. While standard GP surrogates have been well-established in Bayesian optimization, Bayesian neural networks (BNNs) have recently become practical function approximators, with many benefits over standard GPs such as the ability to naturally handle non-stationarity and learn representations for high-dimensional data. In this paper, we study BNNs as alternatives to standard GP surrogates for optimization. We consider a variety of approximate inference procedures for finite-width BNNs, including high-quality Hamiltonian Monte Carlo, low-cost stochastic MCMC, and heuristics such as deep ensembles. We also consider infinite-width BNNs, linearized Laplace approximations, and partially stochastic models such as deep kernel learning. We evaluate this collection of surrogate models on diverse problems with varying dimensionality, number of objectives, non-stationarity, and discrete and continuous inputs. We find: (i) the ranking of methods is highly problem dependent, suggesting the need for tailored inductive biases; (ii) HMC is the most successful approximate inference procedure for fully stochastic BNNs; (iii) full stochasticity may be unnecessary as deep kernel learning is relatively competitive; (iv) deep ensembles perform relatively poorly; (v) infinite-width BNNs are particularly promising, especially in high dimensions.", "type": "Poster", @@ -1244,7 +1244,7 @@ "id": 19763, "title": "Amortizing intractable inference in large language models", "authors": [ - "Edward Hu", + "Edward J Hu", "Moksh Jain", "Eric Elmoznino", "Younesse Kaddar", @@ -1283,11 +1283,11 @@ "id": 19739, "title": "Gene Regulatory Network Inference in the Presence of Dropouts: a Causal View", "authors": [ - "HAOYUE DAI", + "Haoyue Dai", "Ignavier Ng", "Gongxu Luo", - "Petar Stojanov", "Peter Spirtes", + "Petar Stojanov", "Kun Zhang" ], "abstract": "Gene regulatory network inference (GRNI) is a challenging problem, particularly owing to the presence of zeros in single-cell RNA sequencing data: some are biological zeros representing no gene expression, while some others are technical zeros arising from the sequencing procedure (aka dropouts), which may bias GRNI by distorting the joint distribution of the measured gene expressions. Existing approaches typically handle dropout error via imputation, which may introduce spurious relations as the true joint distribution is generally unidentifiable. To tackle this issue, we introduce a causal graphical model to characterize the dropout mechanism, namely, Causal Dropout Model. We provide a simple yet effective theoretical result: interestingly, the conditional independence (CI) relations in the data with dropouts, after deleting the samples with zero values (regardless if technical or not) for the conditioned variables, are asymptotically identical to the CI relations in the original data without dropouts. This particular test-wise deletion procedure, in which we perform CI tests on the samples without zeros for the conditioned variables, can be seamlessly integrated with existing structure learning approaches including constraint-based and greedy score-based methods, thus giving rise to a principled framework for GRNI in the presence of dropouts. We further show that the causal dropout model can be validated from data, and many existing statistical models to handle dropouts fit into our model as specific parametric instances. 
Empirical evaluation on synthetic, curated, and real-world experimental transcriptomic data comprehensively demonstrate the efficacy of our method.", @@ -1308,7 +1308,7 @@ "Hongxin Zhang", "Qinhong Zhou", "Zhenfang Chen", - "David Cox", + "David Daniel Cox", "Yiming Yang", "Chuang Gan" ], @@ -1352,7 +1352,7 @@ "Raj Ghugare", "Santiago Miret", "Adriana Hugessen", - "mariano Phielipp", + "Mariano Phielipp", "Glen Berseth" ], "abstract": "Reinforcement learning (RL) over text representations can be effective for finding high-value policies that can search over graphs. However, RL requires careful structuring of the search space and algorithm design to be effective in this challenge. Through extensive experiments, we explore how different design choices for text grammar and algorithmic choices for training can affect an RL policy's ability to generate molecules with desired properties. We arrive at a new RL-based molecular design algorithm (ChemRLformer) and perform a thorough analysis using 25 molecule design tasks, including computationally complex protein docking simulations. From this analysis, we discover unique insights in this problem space and show that ChemRLformer achieves state-of-the-art performance while being more straightforward than prior work by demystifying which design choices are actually helpful for text-based molecule design.", @@ -1368,11 +1368,11 @@ "id": 18045, "title": "Learning Performance-Improving Code Edits", "authors": [ - "Alexander Shypula", + "Alexander G Shypula", "Aman Madaan", "Yimeng Zeng", "Uri Alon", - "Jacob Gardner", + "Jacob R. Gardner", "Yiming Yang", "Milad Hashemi", "Graham Neubig", @@ -1467,7 +1467,7 @@ "id": 19731, "title": "Unprocessing Seven Years of Algorithmic Fairness", "authors": [ - "Andr\u00e9 F. Cruz", + "Andr\u00e9 Cruz", "Moritz Hardt" ], "abstract": "Seven years ago, researchers proposed a postprocessing method to equalize the error rates of a model across different demographic groups. The work launched hundreds of papers purporting to improve over the postprocessing baseline. We empirically evaluate these claims through thousands of model evaluations on several tabular datasets. We find that the fairness-accuracy Pareto frontier achieved by postprocessing contains all other methods we were feasibly able to evaluate. In doing so, we address two common methodological errors that have confounded previous observations. One relates to the comparison of methods with different unconstrained base models. The other concerns methods achieving different levels of constraint relaxation. At the heart of our study is a simple idea we call unprocessing that roughly corresponds to the inverse of postprocessing. Unprocessing allows for a direct comparison of methods using different underlying models and levels of relaxation.", @@ -1498,7 +1498,7 @@ }, { "id": 19399, - "title": "Learning Reusable Dense Rewards for Multi-Stage Tasks", + "title": "DrS: Learning Reusable Dense Rewards for Multi-Stage Tasks", "authors": [ "Tongzhou Mu", "Minghua Liu", @@ -1517,7 +1517,7 @@ "id": 19027, "title": "Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment", "authors": [ - "Siyao Li", + "Li Siyao", "Tianpei Gu", "Zhitao Yang", "Zhengyu Lin", @@ -1546,7 +1546,7 @@ "Junting Pan", "Hao Dong", "Yu Qiao", - "Gao Peng", + "Peng Gao", "Hongsheng Li" ], "abstract": "Driven by large-data pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful promptable framework, revolutionizing the segmentation field. 
Despite the generality, customizing SAM for specific visual concepts without man-powered prompting is under-explored, e.g., automatically segmenting your pet dog in numerous images. In this paper, we introduce a training-free Personalization approach for SAM, termed PerSAM. Given only one-shot data, i.e., a single image with a reference mask, we first obtain a positive-negative location prior for the target concept in new images. Then, aided by target visual semantics, we empower SAM for personalized object segmentation via two proposed techniques: target-guided attention and target-semantic prompting. In this way, we can effectively customize the general-purpose SAM for private use without any training. To further alleviate the ambiguity of segmentation scales, we present an efficient one-shot fine-tuning variant, PerSAM-F. Freezing the entire SAM, we introduce a scale-aware fine-tuning to aggregate multi-scale masks, which only tunes 2 parameters within 10 seconds for improved performance. To demonstrate our efficacy, we construct a new dataset, PerSeg, for the evaluation of personalized object segmentation, and also test our methods on various one-shot image and video segmentation benchmarks. Besides, we propose to leverage PerSAM to improve DreamBooth for personalized text-to-image synthesis. By mitigating the disturbance of training-set backgrounds, our approach showcases better target appearance generation and higher fidelity to the input text prompt.", @@ -1565,7 +1565,7 @@ "title": "Conserve-Update-Revise to Cure Generalization and Robustness Trade-off in Adversarial Training", "authors": [ "Shruthi Gowda", - "Bahram Yoosefizonooz", + "Bahram Zonooz", "Elahe Arani" ], "abstract": "Adversarial training improves the robustness of neural networks against adversarial attacks, albeit at the expense of the trade-off between standard and robust generalization.To unveil the underlying factors driving this phenomenon, we examine the layer-wise learning capabilities of neural networks during the transition from a standard to an adversarial setting. Our empirical findings demonstrate that selectively updating specific layers while preserving others can substantially enhance the network's learning capacity. We, therefore, propose CURE, a novel training framework that leverages a gradient prominence criterion to perform selective conservation, updating, and revision of weights. Importantly, CURE is designed to be dataset- and architecture-agnostic, ensuring its applicability across various scenarios. It effectively tackles both memorization and overfitting issues, thus enhancing the trade-off between robustness and generalization and additionally, this training approach also aids in mitigating \"robust overfitting\". 
Furthermore, our study provides valuable insights into the mechanisms of selective adversarial training and offers a promising avenue for future research.", @@ -1612,7 +1612,7 @@ }, { "id": 19762, - "title": "Learning Energy Decompositions for Partial Inference of GFlowNets", + "title": "Learning Energy Decompositions for Partial Inference in GFlowNets", "authors": [ "Hyosoon Jang", "Minsu Kim", @@ -1669,7 +1669,7 @@ "id": 19746, "title": "Latent Trajectory Learning for Limited Timestamps under Distribution Shift over Time", "authors": [ - "Qiuhao Zeng", + "QIUHAO Zeng", "Changjian Shui", "Long-Kai Huang", "Peng Liu", @@ -1725,9 +1725,9 @@ }, { "id": 19392, - "title": "Beyond Linear Spherical Interpolation: Noise Correction for Image Interpolation with Diffusion Models", + "title": "NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation", "authors": [ - "Pengfei Zheng", + "PengFei Zheng", "Yonggang Zhang", "Zhen Fang", "Tongliang Liu", @@ -1770,7 +1770,7 @@ "Minjun Sung", "Sambhu Harimanas Karumanchi", "Aditya Gahlawat", - "Naira HOVAKIMYAN" + "Naira Hovakimyan" ], "abstract": "We introduce $\\mathcal{L}_1$-MBRL, a control-theoretic augmentation scheme for Model-Based Reinforcement Learning (MBRL) algorithms. Unlike model-free approaches, MBRL algorithms learn a model of the transition function using data and use it to design a control input. Our approach generates an approximate control-affine model of the learned transition function according to the switching law. Using the approximate model, control input produced by the underlying MBRL is perturbed by the $\\mathcal{L}_1$ adaptive control, which is designed to enhance the robustness of the system against uncertainties. Importantly, this approach is agnostic to the choice of MBRL algorithm, which enables the utilization of the scheme in various MBRL algorithms. Our method exhibits superior performance and sample efficiency on multiple MuJoCo environments, both with and without system noise, as demonstrated through numerical simulations.", "type": "Poster", @@ -1806,7 +1806,7 @@ }, { "id": 19021, - "title": "Dictionary Contrastive Forward Learning via Adaptive Label Embeddings", + "title": "Dictionary Contrastive Learning for Efficient Local Supervision without Auxiliary Networks", "authors": [ "Suhwan Choi", "Myeongho Jeon", @@ -1869,7 +1869,7 @@ "title": "Orbit-Equivariant Graph Neural Networks", "authors": [ "Matthew Morris", - "Bernardo Grau", + "Bernardo Cuenca Grau", "Ian Horrocks" ], "abstract": "Equivariance is an important structural property that is captured by architectures such as graph neural networks (GNNs). However, equivariant graph functions cannot produce different outputs for similar nodes, which may be undesirable when the function is trying to optimize some global graph property. In this paper, we define orbit-equivariance, a relaxation of equivariance which allows for such functions whilst retaining important structural inductive biases. We situate the property in the hierarchy of graph functions, define a taxonomy of orbit-equivariant functions, and provide four different ways to achieve non-equivariant GNNs. 
For each, we analyze their expressivity with respect to orbit-equivariance and evaluate them on two novel datasets, one of which stems from a real-world use-case of designing optimal bioisosteres.", @@ -1890,8 +1890,8 @@ "Zangwei Zheng", "Jianyang Gu", "Xiangyu Peng", - "Zhaopan Xu", - "Zhou Daquan", + "xu Zhao Pan", + "Daquan Zhou", "Lei Shang", "Baigui Sun", "Xuansong Xie", @@ -1997,7 +1997,7 @@ "authors": [ "Yiding Jiang", "Christina Baek", - "Zico Kolter" + "J Zico Kolter" ], "abstract": "Learning features from data is one of the defining characteristics of deep learning,but our theoretical understanding of the role features play in deep learning is stillrudimentary. To address this gap, we introduce a new tool, the interaction tensor,for empirically analyzing the interaction between data and model through features.With the interaction tensor, we make several key observations about how featuresare distributed in data and how models with different random seeds learn differentfeatures. Based on these observations, we propose a conceptual framework for fea-ture learning. Under this framework, the expected accuracy for a single hypothesisand agreement for a pair of hypotheses can both be derived in closed-form. Wedemonstrate that the proposed framework can explain empirically observed phenomena, including the recently discovered Generalization Disagreement Equality(GDE) that allows for estimating the generalization error with only unlabeled data.Further, our theory also provides explicit construction of natural data distributionsthat break the GDE. Thus, we believe this work provides valuable new insight intoour understanding of feature learning.", "type": "Oral", @@ -2033,12 +2033,12 @@ "title": "Small-scale proxies for large-scale Transformer training instabilities", "authors": [ "Mitchell Wortsman", - "Peter Liu", + "Peter J Liu", "Lechao Xiao", - "Katie Everett", - "Alexander Alemi", + "Katie E Everett", + "Alexander A Alemi", "Ben Adlam", - "John Co-Reyes", + "John D Co-Reyes", "Izzeddin Gur", "Abhishek Kumar", "Roman Novak", @@ -2120,9 +2120,9 @@ }, { "id": 19771, - "title": "GraphGuard: Provably Robust Graph Classification against Adversarial Attacks", + "title": "GNNCert: Deterministic Certification of Graph Neural Networks against Adversarial Perturbations", "authors": [ - "Zaishuo Xia", + "zaishuo xia", "Han Yang", "Binghui Wang", "Jinyuan Jia" @@ -2138,11 +2138,11 @@ }, { "id": 19769, - "title": "Proving Test Set Contamination for Black-Box Language Models", + "title": "Proving Test Set Contamination in Black-Box Language Models", "authors": [ "Yonatan Oren", "Nicole Meister", - "Niladri Chatterji", + "Niladri S. Chatterji", "Faisal Ladhak", "Tatsunori Hashimoto" ], @@ -2162,7 +2162,7 @@ "Ahmad Faiz", "Sotaro Kaneda", "Ruhan Wang", - "Rita Osi", + "Rita Chukwunyere Osi", "Prateek Sharma", "Fan Chen", "Lei Jiang" @@ -2204,7 +2204,7 @@ "title": "Interpreting CLIP's Image Representation via Text-Based Decomposition", "authors": [ "Yossi Gandelsman", - "Alexei Efros", + "Alexei A Efros", "Jacob Steinhardt" ], "abstract": "We investigate the CLIP image encoder by analyzing how individual model components affect the final representation. We decompose the image representation as a sum across individual image patches, model layers, and attention heads, and use CLIP's text representation to interpret the summands. 
Interpreting the attention heads, we characterize each head's role by automatically finding text representations that span its output space, which reveals property-specific roles for many heads (e.g.~location or shape). Next, interpreting the image patches, we uncover an emergent spatial localization within CLIP. Finally, we use this understanding to remove spurious features from CLIP and to create a strong zero-shot image segmenter. Our results indicate that scalable understanding of transformer models is attainable and can be used to repair and improve models.", @@ -2221,9 +2221,9 @@ "title": "Horizon-Free Regret for Linear Markov Decision Processes", "authors": [ "Zhang Zihan", - "Jason Lee", + "Jason D. Lee", "Yuxin Chen", - "Simon Du" + "Simon Shaolei Du" ], "abstract": "A recent line of works showed regret bounds in reinforcement learning (RL) can be (nearly) independent of planning horizon, a.k.a. the horizon-free bounds. However, these regret bounds only apply to settings where a polynomial dependency on the size of transition model is allowed, such as tabular Markov Decision Process (MDP) and linear mixture MDP. We give the first horizon-free bound for the popular linear MDP setting where the size of the transition model can be exponentially large or even uncountable. In contrast to prior works which explicitly estimate the transition model and compute the inhomogeneous value functions at different time steps, we directly estimate the value functions and confidence sets. We obtain the horizon-free bound by: (1) maintaining multiple weighted least square estimators for the value functions; and (2) a structural lemma which shows the maximal total variation of the inhomogeneous value functions is bounded by a polynomial factor of the feature dimension.", "type": "Poster", @@ -2243,7 +2243,7 @@ "Laura E. Brandt", "Axel Feldmann", "Zhoutong Zhang", - "William Freeman" + "William T. Freeman" ], "abstract": "Deep features are a cornerstone of computer vision research, capturing image semantics and enabling the community to solve downstream tasks even in the zero- or few-shot regime. However, these features often lack the spatial resolution to directly perform dense prediction tasks like segmentation and depth prediction because models aggressively pool information over large areas. In this work, we introduce FeatUp, a task- and model-agnostic framework to restore lost spatial information in deep features. We introduce two variants of FeatUp: one that guides features with high-resolution signal in a single forward pass, and one that fits an implicit model to a single image to reconstruct features at any resolution. Both approaches use a multi-view consistency loss with deep analogies to NeRFs. Our features retain their original semantics and can be swapped into existing applications to yield resolution and performance gains even without re-training. 
We show that FeatUp significantly outperforms other feature upsampling and image super-resolution approaches in class activation map generation, transfer learning for segmentation and depth prediction, and end-to-end training for semantic segmentation.", "type": "Poster", @@ -2304,8 +2304,8 @@ "authors": [ "Gabriel Cardoso", "Yazid Janati el idrissi", - "Eric Moulines", - "Sylvain Le Corff" + "Sylvain Le Corff", + "Eric Moulines" ], "abstract": "Ill-posed linear inverse problems arise frequently in various applications, from computational photography to medical imaging.A recent line of research exploits Bayesian inference with informative priors to handle the ill-posedness of such problems.Amongst such priors, score-based generative models (SGM) have recently been successfully applied to several different inverse problems.In this study, we exploit the particular structure of the prior defined by the SGM to define a sequence of intermediate linear inverse problems. As the noise level decreases, the posteriors of these inverse problems get closer to the target posterior of the original inverse problem. To sample from this sequence of posteriors, we propose the use of Sequential Monte Carlo (SMC) methods.The proposed algorithm, \\algo, is shown to be theoretically grounded and we provide numerical simulations showing that it outperforms competing baselines when dealing with ill-posed inverse problems in a Bayesian setting.", "type": "Oral", @@ -2371,14 +2371,14 @@ "id": 19727, "title": "Graph Neural Networks for Learning Equivariant Representations of Neural Networks", "authors": [ - "Miltiadis (Miltos) Kofinas", + "Miltiadis Kofinas", "Boris Knyazev", "Yan Zhang", "Yunlu Chen", - "Gertjan J Burghouts", + "Gertjan J. Burghouts", "Efstratios Gavves", - "Cees G Snoek", - "David Zhang" + "Cees G. M. Snoek", + "David W. Zhang" ], "abstract": "Neural networks that process the parameters of other neural networks find applications in domains as diverse as classifying implicit neural representations, generating neural network weights, and predicting generalization errors.However, existing approaches either overlook the inherent permutation symmetry in the neural network or rely on intricate weight-sharing patterns to achieve equivariance, while ignoring the impact of the network architecture itself.In this work, we propose to represent neural networks as computational graphs of parameters, which allows us to harness powerful graph neural networks and transformers that preserve permutation symmetry.Consequently, our approach enables a single model to encode neural computational graphs with diverse architectures.We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations, predicting generalization performance, and learning to optimize, while consistently outperforming state-of-the-art methods.", "type": "Oral", @@ -2419,7 +2419,7 @@ "Shunyu Yao", "Kexin Pei", "Ofir Press", - "Karthik Narasimhan" + "Karthik R Narasimhan" ], "abstract": "Language models (LMs) have been improving rapidly, and today we lack benchmarks that are hard to solve but easy to evaluate. Coding is such a desired task, but existing coding benchmarks only feature self-contained problems solvable within tens of lines. Inspired by how real-world programmers code to fix bugs or ship new features, we introduce SWE-bench, a benchmark with 2,294 GitHub issues sourced from 12 popular Python repositories. 
Given a codebase and an issue description, an LM is tasked with editing the codebase to resolve the issue and pass all related tests. Our experiments show that both state-of-the-art proprietary LMs and our fine-tuned LM, SWE-Llama, can resolve only the simplest issues. For example, Claude 2 and GPT-4 solve a mere 3.6% and 1.3% of tasks respectively, even when provided with an oracle retriever. Through systematic analysis, we identify various factors underlying LM performances, such as the retrieval setup, codebase size, and issue complexity. We also identify key challenges for LMs to solve real-world software engineering problems, including understanding cross-file dependencies, localizing edit locations, and generating long and well-formatted patch files. SWE-bench shows that real-world software engineering is a diverse, challenging and sustainable testbed for evaluating a wide range of language model abilities.", "type": "Oral", @@ -2435,10 +2435,10 @@ "title": "ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation", "authors": [ "Kim-Celine Kahl", - "Carsten L\u00fcth", + "Carsten T. L\u00fcth", "Maximilian Zenk", "Klaus Maier-Hein", - "Paul F. Jaeger" + "Paul F Jaeger" ], "abstract": "Uncertainty estimation is an essential and heavily-studied component for the reliable application of semantic segmentation methods. While various studies exist claiming methodological advances on the one hand, and successful application on the other hand, the field is currently hampered by a gap between theory and practice leaving fundamental questions unanswered: Can data-related and model-related uncertainty really be separated in practice? Which components of an uncertainty method are essential for real-world performance? Which uncertainty method works well for which application? In this work, we link this research gap to a lack of systematic and comprehensive evaluation of uncertainty methods. Specifically, we identify three key pitfalls in current literature and present an evaluation framework that bridges the research gap by providing 1) a controlled environment for studying data ambiguities as well as distribution shifts, 2) systematic ablations of relevant method components, and 3) test-beds for the five predominant uncertainty applications: OoD-detection, active learning, failure detection, calibration, and ambiguity modeling. Empirical results on simulated as well as real-world data demonstrate how the proposed framework is able to answer the predominant questions in the field revealing for instance that 1) separation of uncertainty types works on simulated data but does not necessarily translate to real-world data, 2) aggregation of scores is a crucial but currently neglected component of uncertainty methods, 3) While ensembles are performing most robustly across the different downstream tasks and settings, test-time augmentation often constitutes a light-weight alternative. (Code will be released upon acceptance)", "type": "Oral", @@ -2517,7 +2517,7 @@ "id": 19749, "title": "The mechanistic basis of data dependence and abrupt learning in an in-context classification task", "authors": [ - "Gautam Reddy Nallamala" + "Gautam Reddy" ], "abstract": "Transformer models exhibit in-context learning: the ability to accurately predict the response to a novel query based on illustrative examples in the input sequence, which contrasts with traditional in-weights learning of query-output relationships. 
What aspects of the training data distribution and architecture favor in-context vs in-weights learning? Recent work has shown that specific distributional properties inherent in language, such as burstiness, large dictionaries and skewed rank-frequency distributions, control the trade-off or simultaneous appearance of these two forms of learning. We first show that these results are recapitulated in a minimal attention-only network trained on a simplified dataset. In-context learning (ICL) is driven by the abrupt emergence of an induction head, which subsequently competes with in-weights learning. By identifying progress measures that precede in-context learning and targeted experiments, we construct a two-parameter model of an induction head which emulates the full data distributional dependencies displayed by the attention-based network. A phenomenological model of induction head formation traces its abrupt emergence to the sequential learning of three nested logits enabled by an intrinsic curriculum. We propose that the sharp transitions in attention-based networks arise due to a specific chain of multi-layer operations necessary to achieve ICL, which is implemented by nested nonlinearities sequentially learned during training.", "type": "Oral", @@ -2578,8 +2578,8 @@ "Yuliang Xiu", "Weiyang Liu", "Liam Paull", - "Michael J Black", - "Bernhard Schoelkopf" + "Michael J. Black", + "Bernhard Sch\u00f6lkopf" ], "abstract": "The creation of photorealistic virtual worlds requires the accurate modeling of 3D surface geometry for a wide range of objects. For this, meshes are appealing since they enable 1) fast physics-based rendering with realistic material and lighting, 2) physical simulation, and 3) are memory-efficient for modern graphics pipelines. Recent work on reconstructing and statistically modeling 3D shape, however, has critiqued meshes as being topologically inflexible. To capture a wide range of object shapes, any 3D representation must be able to model solid, watertight, shapes as well as thin, open, surfaces. Recent work has focused on the former, and methods for reconstructing open surfaces do not support fast reconstruction with material and lighting or unconditional generative modelling. Inspired by the observation that open surfaces can be seen as islands floating on watertight surfaces, we parametrize open surfaces by defining a manifold signed distance field on watertight templates. With this parametrization, we further develop a grid-based and differentiable representation that parametrizes both watertight and non-watertight meshes of arbitrary topology. Our new representation, called Ghost-on-the-Shell (G-Shell), enables two important applications: differentiable rasterization-based reconstruction from multiview images and generative modelling of non-watertight meshes. We empirically demonstrate that G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks, while also performing effectively for watertight meshes.", "type": "Oral", @@ -2597,7 +2597,7 @@ "Akari Asai", "Zeqiu Wu", "Yizhong Wang", - "Avi Sil", + "Avirup Sil", "Hannaneh Hajishirzi" ], "abstract": "Retrieval-Augmented Generation (RAG), an ad hoc approach that augments Language Models (LMs) with retrieval, decreases hallucination issues of large LMs. 
However, indiscriminately retrieving and incorporating a fixed number of retrieved passages, regardless of whether retrieval is necessary, or passages are relevant, diminishes LM versatility or can lead to unhelpful response generation.In this work, we introduce a new framework called **Self-Reflective Retrieval-Augmented Generation (Self-RAG)** that enhances an LM's quality and factuality through retrieval and self-reflection. Our framework trains a single arbitrary LM that adaptively retrieves passages on-demand, and generates and reflects on retrieved passages and its own generations using special tokens, called *reflection* tokens. Generating reflection tokens makes the LM controllable during the inference phase, enabling it to tailor its behavior to diverse task requirements. Experiments show that Self-RAG (7B and 13B parameters) significantly outperforms state-of-the-art LLMs and retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG outperforms ChatGPT and retrieval-augmented Llama2-chat on multiple tasks including Open-domain QA and fact verification, and it shows significant gains in factuality scores and citation accuracy for long-form generations relative to these models.", @@ -2655,7 +2655,7 @@ "authors": [ "Izzeddin Gur", "Hiroki Furuta", - "Austin Huang", + "Austin V Huang", "Mustafa Safdari", "Yutaka Matsuo", "Douglas Eck", @@ -2672,7 +2672,7 @@ }, { "id": 19732, - "title": "ASID: Active Exploration for System Identification and Reconstruction in Robotic Manipulation", + "title": "ASID: Active Exploration for System Identification in Robotic Manipulation", "authors": [ "Marius Memmel", "Andrew Wagenmaker", @@ -2694,7 +2694,7 @@ "title": "Predictive auxiliary objectives in deep RL mimic learning in the brain", "authors": [ "Ching Fang", - "Kimberly Stachenfeld" + "Kim Stachenfeld" ], "abstract": "The ability to predict upcoming events has been hypothesized to comprise a key aspect of natural and machine cognition. This is supported by trends in deep reinforcement learning (RL), where self-supervised auxiliary objectives such as prediction are widely used to support representation learning and improve task performance. Here, we study the effects predictive auxiliary objectives have on representation learning across different modules of an RL system and how these mimic representational changes observed in the brain. We find that predictive objectives improve and stabilize learning particularly in resource-limited architectures, and we identify settings where longer predictive horizons better support representational transfer. Furthermore, we find that representational changes in this RL system bear a striking resemblance to changes in neural activity observed in the brain across various experiments. Specifically, we draw a connection between the auxiliary predictive model of the RL system and hippocampus, an area thought to learn a predictive model to support memory-guided behavior. We also connect the encoder network and the value learning network of the RL system to visual cortex and striatum in the brain, respectively. This work demonstrates how representation learning in deep RL systems can provide an interpretable framework for modeling multi-region interactions in the brain. 
The deep RL perspective taken here also suggests an additional role of the hippocampus in the brain-- that of an auxiliary learning system that benefits representation learning in other regions.", "type": "Oral", @@ -2713,7 +2713,7 @@ "Jiatao Gu", "Laurent Dinh", "Evangelos Theodorou", - "Joshua Susskind", + "Joshua M. Susskind", "Shuangfei Zhai" ], "abstract": "Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs. DMs work by constructing a Stochastic Differential Equation (SDE) in the input space (ie, position space), and using a neural network to reverse it. In this work, we introduce a novel generative modeling framework grounded in \\textbf{phase space dynamics}, where a phase space is defined as {an augmented space encompassing both position and velocity.} Leveraging insights from Stochastic Optimal Control, we construct a path measure in the phase space that enables efficient sampling. {In contrast to DMs, our framework demonstrates the capability to generate realistic data points at an early stage of dynamics propagation.} This early prediction sets the stage for efficient data generation by leveraging additional velocity information along the trajectory. On standard image generation benchmarks, our model yields favorable performance over baselines in the regime of small Number of Function Evaluations (NFEs). Furthermore, our approach rivals the performance of diffusion models equipped with efficient sampling techniques, underscoring its potential as a new tool generative modeling.", @@ -2747,7 +2747,7 @@ "id": 18596, "title": "Incremental Randomized Smoothing Certification", "authors": [ - "Shubham Dipak Ugare", + "Shubham Ugare", "Tarun Suresh", "Debangshu Banerjee", "Gagandeep Singh", @@ -2808,7 +2808,7 @@ "Boris Bonev", "Gennady Pekhimenko", "Kamyar Azizzadenesheli", - "anima anandkumar" + "Anima Anandkumar" ], "abstract": "Neural operators, such as Fourier Neural Operators (FNO), form a principled approach for learning solution operators for partial differential equations (PDE) and other mappings between function spaces. However, many real-world problems require high-resolution training data, and the training time and limited GPU memory pose big barriers. One solution is to train neural operators in mixed precision to reduce the memory requirement and increase training speed. However, existing mixed-precision training techniques are designed for standard neural networks, and we find that their direct application to FNO leads to numerical overflow and poor memory efficiency. Further, at first glance, it may appear that mixed precision in FNO will lead to drastic accuracy degradation since reducing the precision of the Fourier transform yields poor results in classical numerical solvers. We show that this is not the case; in fact, we prove that reducing the precision in FNO still guarantees a good approximation bound, when done in a targeted manner. Specifically, we build on the intuition that neural operator learning inherently induces an approximation error, arising from discretizing the infinite-dimensional ground-truth input function, implying that training in full precision is not needed. We formalize this intuition by rigorously characterizing the approximation and precision errors of FNO and bounding these errors for general input functions. We prove that the precision error is asymptotically comparable to the approximation error. 
Based on this, we design a simple method to optimize the memory-intensive half-precision tensor contractions by greedily finding the optimal contraction order. Through extensive experiments on different state-of-the-art neural operators, datasets, and GPUs, we demonstrate that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.", "type": "Poster", @@ -2845,7 +2845,7 @@ "Pan Xu", "A. Rupam Mahmood", "Doina Precup", - "anima anandkumar", + "Anima Anandkumar", "Kamyar Azizzadenesheli" ], "abstract": "We present a scalable and effective exploration strategy based on Thompson sampling for reinforcement learning (RL). One of the key shortcomings of existing Thompson sampling algorithms is the need to perform a Gaussian approximation of the posterior distribution, which is not a good surrogate in most practical settings. We instead directly sample the Q-function from its posterior distribution, by using Langevin Monte Carlo, an efficient type of Markov Chain Monte Carlo (MCMC) method. Our method only needs to perform noisy gradient descent updates to learn the exact posterior distribution of the Q-function, which makes our approach easy to deploy in deep RL. Our theoretical analysis shows that, in the linear Markov decision process (linear MDP) setting, the proposed method has a regret bound of $\\tilde{O}(d^{3/2}H^{5/2}\\sqrt{T})$, where $d$ is the dimension of the feature mapping, $H$ is the planning horizon, and $T$ is the total number of steps. We apply the proposed approach to deep RL, by using the Adam optimizer to perform gradient updates. Our approach achieves better or similar results compared with state-of-the-art deep RL algorithms on several challenging exploration tasks from the Atari57 suite.", @@ -2918,7 +2918,7 @@ "authors": [ "Defu Cao", "Furong Jia", - "Sercan Arik", + "Sercan O Arik", "Tomas Pfister", "Yixiang Zheng", "Wen Ye", @@ -2935,7 +2935,7 @@ }, { "id": 18594, - "title": "OpenNerf: Open Set 3D Neural Scene Segmentation with Pixel-Wise Features and Rendered Novel Views", + "title": "OpenNeRF: Open Set 3D Neural Scene Segmentation with Pixel-Wise Features and Rendered Novel Views", "authors": [ "Francis Engelmann", "Fabian Manhardt", @@ -2962,8 +2962,8 @@ "Robert Sim", "Subhabrata Mukherjee", "Victor R\u00fchle", - "Laks Lakshmanan", - "Ahmed H Awadallah" + "Laks V. S. Lakshmanan", + "Ahmed Hassan Awadallah" ], "abstract": "Large language models (LLMs) excel in most NLP tasks but also require expensive cloud servers for deployment due to their size, while smaller models that can be deployed on lower cost (e.g., edge) devices, tend to lag behind in terms of response quality. Therefore in this work we propose a hybrid inference approach which combines their respective strengths to save cost and maintain quality. Our approach uses a router that assigns queries to the small or large model based on the predicted query difficulty and the desired quality level. The desired quality level can be tuned dynamically at test time to seamlessly trade quality for cost as per the scenario requirements. 
In experiments our approach allows us to make up to 40% fewer calls to the large model, with no drop in response quality.", "type": "Poster", @@ -3011,7 +3011,7 @@ }, { "id": 19622, - "title": "AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework", + "title": "AutoLoRa: An Automated Robust Fine-Tuning Framework", "authors": [ "Xilie Xu", "Jingfeng Zhang", @@ -3050,7 +3050,7 @@ "Edwin Zhang", "Yujie Lu", "Shinda Huang", - "William Wang", + "William Yang Wang", "Amy Zhang" ], "abstract": "Training generalist agents is difficult across several axes, requiring us to deal with high-dimensional inputs (space), long horizons (time), and generalization to novel tasks. Recent advances with architectures have allowed for improved scaling along one or two of these axes, but are still computationally prohibitive to use. In this paper, we propose to address all three axes by leveraging Language to Control Diffusion models as a hierarchical planner conditioned on language (LCD). We effectively and efficiently scale diffusion models for planning in extended temporal, state, and task dimensions to tackle long horizon control problems conditioned on natural language instructions, as a step towards generalist agents. Comparing LCD with other state-of-the-art models on the CALVIN language benchmark finds that LCD outperforms other SOTA methods in multi-task success rates, whilst improving inference speed over other comparable diffusion models by 3.3x~15x. We show that LCD can successfully leverage the unique strength of diffusion models to produce coherent long range plans while addressing their weakness in generating low-level details and control.", @@ -3066,7 +3066,7 @@ "id": 19497, "title": "Towards Principled Representation Learning from Videos for Reinforcement Learning", "authors": [ - "Dipendra Kumar Misra", + "Dipendra Misra", "Akanksha Saran", "Tengyang Xie", "Alex Lamb", @@ -3095,7 +3095,7 @@ "Jatin Chauhan", "Olaf Wiest", "Olexandr Isayev", - "Connor Coley", + "Connor W. Coley", "Yizhou Sun", "Wei Wang" ], @@ -3114,8 +3114,8 @@ "authors": [ "Claudio Battiloro", "Indro Spinelli", - "Lev Telyatinkov", - "Michael Bronstein", + "Lev Telyatnikov", + "Michael M. Bronstein", "Simone Scardapane", "Paolo Di Lorenzo" ], @@ -3199,9 +3199,9 @@ "authors": [ "Xiang Fu", "Tian Xie", - "Andrew Rosen", - "Tommi Jaakkola", - "Jake Smith" + "Andrew Scott Rosen", + "Tommi S. Jaakkola", + "Jake Allen Smith" ], "abstract": "Metal-organic frameworks (MOFs) are of immense interest in applications such as gas storage and carbon capture due to their exceptional porosity and tunable chemistry. Their modular nature has enabled the use of template-based methods to generate hypothetical MOFs by combining molecular building blocks in accordance with known network topologies. However, the ability of these methods to identify top-performing MOFs is often hindered by the limited diversity of the resulting chemical space. In this work, we propose MOFDiff: a coarse-grained (CG) diffusion model that generates CG MOF structures through a denoising diffusion process over the coordinates and identities of the building blocks. The all-atom MOF structure is then determined through a novel assembly algorithm. As the diffusion model generates 3D MOF structures by predicting scores in E(3), we employ equivariant graph neural networks that respect the permutational and roto-translational symmetries. 
We comprehensively evaluate our model's capability to generate valid and novel MOF structures and its effectiveness in designing outstanding MOF materials for carbon capture applications with molecular simulations.", "type": "Poster", @@ -3262,7 +3262,7 @@ "John Kirchenbauer", "Hong-Min Chu", "Gowthami Somepalli", - "Brian Bartoldson", + "Brian R. Bartoldson", "Bhavya Kailkhura", "Avi Schwarzschild", "Aniruddha Saha", @@ -3448,8 +3448,8 @@ "Xiang Cheng", "Minhak Song", "Chulhee Yun", - "Suvrit Sra", - "Ali Jadbabaie" + "Ali Jadbabaie", + "Suvrit Sra" ], "abstract": "Transformer training is notoriously difficult, requiring a careful design of optimizers and use of various heuristics. We make progress towards understanding the subtleties of training Transformers by carefully studying a simple yet canonical linearized *shallow* Transformer model. Specifically, we train linear Transformers to solve regression tasks, inspired by J. von Oswald et al. (ICML 2023), and K. Ahn et al. (NeurIPS 2023). Most importantly, we observe that our proposed linearized models can reproduce several prominent aspects of Transformer training dynamics. Consequently, the results obtained in this paper suggest that a simple linearized Transformer model could actually be a valuable, realistic abstraction for understanding Transformer optimization.", "type": "Poster", @@ -3511,7 +3511,7 @@ }, { "id": 19599, - "title": "Feature Learning in Infinite Depth Neural Networks", + "title": "Tensor Programs VI: Feature Learning in Infinite Depth Neural Networks", "authors": [ "Greg Yang", "Dingli Yu", @@ -3716,7 +3716,7 @@ "title": "Spike-driven Transformer V2: Meta Spiking Neural Network Architecture Inspiring the Design of Next-generation Neuromorphic Chips", "authors": [ "Man Yao", - "Jiakui Hu", + "JiaKui Hu", "Tianxiang Hu", "Yifan Xu", "Zhaokun Zhou", @@ -3917,7 +3917,7 @@ "Xiwen Zhang", "Jianzhu Ma", "Jian Peng", - "Qiang Liu" + "qiang liu" ], "abstract": "Diffusion models have revolutionized text-to-image generation with its exceptional quality and creativity. However, its multi-step sampling process is known to be slow, often requiring tens of inference steps to obtain satisfactory results. Previous attempts to improve its sampling speed and reduce computational costs through distillation have been unsuccessful in achieving a functional one-step model.In this paper, we explore a recent method called Rectified Flow, which, thus far, has only been applied to small datasets. The core of Rectified Flow lies in its \\emph{reflow} procedure, which straightens the trajectories of probability flows, refines the coupling between noises and images, and facilitates the distillation process with student models. We propose a novel text-conditioned pipeline to turn Stable Diffusion (SD) into an ultra-fast one-step model, in which we find reflow plays a critical role in improving the assignment between noise and images. Leveraging our new pipeline, we create, to the best of our knowledge, the first one-step diffusion-based text-to-image generator with SD-level image quality, achieving an FID (Fr\u00e9chet Inception Distance) of $23.3$ on MS COCO 2017-5k, surpassing the previous state-of-the-art technique, progressive distillation, by a significant margin ($37.2$ $\\rightarrow$ $23.3$ in FID). By utilizing an expanded network with 1.7B parameters, we further improve the FID to $22.4$. We call our one-step models \\emph{InstaFlow}. 
On MS COCO 2014-30k, InstaFlow yields an FID of $13.1$ in just $0.09$ second, the best in $\\leq 0.1$ second regime, outperforming the recent StyleGAN-T ($13.9$ in $0.1$ second). Notably, the training of InstaFlow only costs 199 A100 GPU days.", "type": "Poster", @@ -3954,9 +3954,9 @@ }, { "id": 19573, - "title": "Unsupervised Fact Verification by Language Model Distillation", + "title": "Unsupervised Pretraining for Fact Verification by Language Model Distillation", "authors": [ - "Adrian Bazaga", + "Adri\u00e1n Bazaga", "Pietro Lio", "Gos Micklem" ], @@ -4032,7 +4032,7 @@ "title": "MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models", "authors": [ "Deyao Zhu", - "jun chen", + "Jun Chen", "Xiaoqian Shen", "Xiang Li", "Mohamed Elhoseiny" @@ -4080,7 +4080,7 @@ "Jingbo Wang", "Jinkun Cao", "Wenwei Zhang", - "Bo DAI", + "Bo Dai", "Dahua Lin", "Jiangmiao Pang" ], @@ -4120,7 +4120,7 @@ "Kevin Clark", "Paul Vicol", "Kevin Swersky", - "David Fleet" + "David J. Fleet" ], "abstract": "We present Direct Reward Fine-Tuning (DRaFT), a simple and effective method for fine-tuning diffusion models to maximize differentiable reward functions, such as scores from human preference models. We first show that it is possible to backpropagate the reward gradient through the full sampling procedure, and that doing so achieves strong performance on a variety of reward functions, outperforming reinforcement learning-based approaches. We then propose more efficient variants of DRaFT: DRaFT-K, which truncates backpropagation to only the last K steps of sampling, and DRaFT-LV, which obtains lower-variance gradient estimates for the case when K=1. We show that our methods work well for a variety of reward functions and can be used to substantially improve the aesthetic quality of images generated by Stable Diffusion 1.4. 
Finally, we draw connections between our approach and prior work, providing a unifying perspective on the design space of gradient-based fine-tuning algorithms.", "type": "Poster", @@ -4139,8 +4139,8 @@ "Kai Zhang", "Jian Xie", "Yuxuan Sun", - "Jihyun Ahn", - "Hanzi XU", + "Janice Ahn", + "Hanzi Xu", "Yu Su", "Wenpeng Yin" ], @@ -4157,7 +4157,7 @@ "id": 18019, "title": "Jointly-Learned Exit and Inference for a Dynamic Neural Network", "authors": [ - "Florence Regol", + "florence regol", "Joud Chataoui", "Mark Coates" ], @@ -4198,7 +4198,7 @@ "title": "RETSim: Resilient and Efficient Text Similarity", "authors": [ "Marina Zhang", - "Owen Vallis", + "Owen Skipper Vallis", "Aysegul Bumin", "Tanay Vakharia", "Elie Bursztein" @@ -4219,7 +4219,7 @@ "title": "Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models", "authors": [ "Hyeonho Jeong", - "Jong Ye" + "Jong Chul Ye" ], "abstract": "This paper introduces a novel grounding-guided video-to-video translation framework called Ground-A-Video for multi-attribute video editing.Recent endeavors in video editing have showcased promising results in single-attribute editing or style transfer tasks, either by training T2V models on text-video data or adopting training-free methods.However, when confronted with the complexities of multi-attribute editing scenarios, they exhibit shortcomings such as omitting or overlooking intended attribute changes, modifying the wrong elements of the input video, and failing to preserve regions of the input video that should remain intact.Ground-A-Video attains temporally consistent multi-attribute editing of input videos in a training-free manner without aforementioned shortcomings.Central to our method is the introduction of cross-frame gated attention which incorporates groundings information into the latent representations in a temporally consistent fashion, along with Modulated Cross-Attention and optical flow guided inverted latents smoothing.Extensive experiments and applications demonstrate that Ground-A-Video's zero-shot capacity outperforms other baseline methods in terms of edit-accuracy and frame consistency.Further results and code are available at our project page ( http://ground-a-video.github.io )", "type": "Poster", @@ -4236,7 +4236,7 @@ "authors": [ "Yiheng Du", "Nithin Chalapathi", - "Aditi Krishnapriyan" + "Aditi S. Krishnapriyan" ], "abstract": "We present Neural Spectral Methods, a technique to solve parametric Partial Differential Equations (PDEs), grounded in classical spectral methods. Our method uses orthogonal bases to learn PDE solutions as mappings between spectral coefficients. In contrast to current machine learning approaches which enforce PDE constraints by minimizing the numerical quadrature of the residuals in the spatiotemporal domain, we leverage Parseval's identity and introduce a new training strategy through a spectral loss. Our spectral loss enables more efficient differentiation through the neural network, and substantially reduces training complexity. At inference time, the computational cost of our method remains constant, regardless of the spatiotemporal resolution of the domain. Our experimental results demonstrate that our method significantly outperforms previous machine learning approaches in terms of speed and accuracy by one to two orders of magnitude on multiple different problems, including reaction-diffusion, and forced and unforced Navier-Stokes equations. 
When compared to numerical solvers of the same accuracy, our method demonstrates a $10\\times$ increase in performance speed.", "type": "Poster", @@ -4348,7 +4348,7 @@ "title": "Pooling Image Datasets with Multiple Covariate Shift and Imbalance", "authors": [ "Sotirios Panagiotis Chytas", - "Vishnu Lokhande", + "Vishnu Suresh Lokhande", "Vikas Singh" ], "abstract": "Small sample sizes are common in many disciplines, which necessitates pooling roughly similar datasets across multiple sites/institutions to study weak but relevant associations between images and disease incidence. Such data often manifest shifts and imbalances in covariates (secondary non-imaging data). These issues are well-studied for classical models, but the ideas simply do not apply to overparameterized DNN models. Consequently, recent work has shown how strategies from fairness and invariant representation learning provides a meaningful starting point, but the current repertoire of methods remains limited to accounting for shifts/imbalances in just a couple of covariates at a time. In this paper, we show how viewing this problem from the perspective of Category theory provides a simple and effective solution that completely avoids elaborate multi-stage training pipelines that would otherwise be needed. We show the effectiveness of this approach via extensive experiments on real datasets. Further, we discuss how our style of formulation offers a unified perspective on at least 5+ distinct problem settings in vision, from self-supervised learningto matching problems in 3D reconstruction.", @@ -4417,7 +4417,7 @@ }, { "id": 19552, - "title": "More Context, Less Distraction: Zero-shot Visual Classification by Inferring and Conditioning on Contextual Attributes", + "title": "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts", "authors": [ "Bang An", "Sicheng Zhu", @@ -4501,7 +4501,7 @@ "Giovanni Palla", "Niki Kilbertus", "Zeynep Akata", - "Fabian Theis" + "Fabian J Theis" ], "abstract": "In optimal transport (OT), a Monge map is known as a mapping that transports a source distribution to a target distribution in the most cost-efficient way. Recently, multiple neural estimators for Monge maps have been developed and applied in diverse unpaired domain translation tasks, e.g. in single-cell biology and computer vision. However, the classic OT framework enforces mass conservation, whichmakes it prone to outliers and limits its applicability in real-world scenarios. The latter can be particularly harmful in OT domain translation tasks, where the relative position of a sample within a distribution is explicitly taken into account. While unbalanced OT tackles this challenge in the discrete setting, its integration into neural Monge map estimators has received limited attention. We propose a theoreticallygrounded method to incorporate unbalancedness into any Monge map estimator. We improve existing estimators to model cell trajectories over time and to predict cellular responses to perturbations. Moreover, our approach seamlessly integrates with the OT flow matching (OT-FM) framework. While we show that OT-FM performs competitively in image translation, we further improve performance byincorporating unbalancedness (UOT-FM), which better preserves relevant features. 
We hence establish UOT-FM as a principled method for unpaired image translation.", "type": "Poster", @@ -4548,7 +4548,7 @@ "id": 17919, "title": "Bounding Box Stability against Feature Dropout Reflects Detector Generalization across Environments", "authors": [ - "Yang2 Yang", + "Yang Yang", "Wenhai Wang", "Zhe Chen", "Jifeng Dai", @@ -4608,7 +4608,7 @@ "Yubo Zhuang", "Xiaohui Chen", "Yun Yang", - "Richard Zhang" + "Richard Y. Zhang" ], "abstract": "$K$-means clustering is a widely used machine learning method for identifying patterns in large datasets. Semidefinite programming (SDP) relaxations have recently been proposed for solving the $K$-means optimization problem that enjoy strong statistical optimality guarantees, but the prohibitive cost of implementing an SDP solver renders these guarantees inaccessible to practical datasets. By contrast, nonnegative matrix factorization (NMF) is a simple clustering algorithm that is widely used by machine learning practitioners, but without a solid statistical underpinning nor rigorous guarantees. In this paper, we describe an NMF-like algorithm that works by solving a \\emph{nonnegative} low-rank restriction of the SDP relaxed $K$-means formulation using a nonconvex Burer--Monteiro factorization approach. The resulting algorithm is just as simple and scalable as state-of-the-art NMF algorithms, while also enjoying the same strong statistical optimality guarantees as the SDP. In our experiments, we observe that our algorithm achieves substantially smaller mis-clustering errors compared to the existing state-of-the-art.", "type": "Oral", @@ -4664,7 +4664,7 @@ "authors": [ "Aidan Scannell", "Riccardo Mereu", - "Paul Chang", + "Paul Edmund Chang", "Ella Tamir", "Joni Pajarinen", "Arno Solin" @@ -4700,10 +4700,10 @@ "id": 19540, "title": "BroGNet: Momentum-Conserving Graph Neural Stochastic Differential Equation for Learning Brownian Dynamics", "authors": [ - "Suresh Suresh", + "Suresh Bishnoi", "Jayadeva Jayadeva", "Sayan Ranu", - "N. M. Anoop Krishnan" + "N M Anoop Krishnan" ], "abstract": "Neural networks (NNs) that exploit strong inductive biases based on physical laws and symmetries have shown remarkable success in learning the dynamics of physical systems directly from their trajectory. However, these works focus only on the systems that follow deterministic dynamics, such as Newtonian or Hamiltonian. Here, we propose a framework, namely Brownian graph neural networks (BroGNet), combining stochastic differential equations (SDEs) and GNNs to learn Brownian dynamics directly from the trajectory. We modify the architecture of BroGNet to enforce linear momentum conservation of the system, which, in turn, provides superior performance on learning dynamics as revealed empirically. We demonstrate this approach on several systems, namely, linear spring, linear spring with binary particle types, and non-linear spring systems, all following Brownian dynamics at finite temperatures. We show that BroGNet significantly outperforms proposed baselines across all the benchmarked Brownian systems. In addition, we demonstrate zero-shot generalizability of BroGNet to simulate unseen system sizes that are two orders of magnitude larger and to different temperatures than those used during training. Finally, we show that BroGNet conserves the momentum of the system resulting in superior performance and data efficiency. 
Altogether, our study contributes to advancing the understanding of the intricate dynamics of Brownian motion and demonstrates the effectiveness of graph neural networks in modeling such complex systems.", "type": "Poster", @@ -4760,7 +4760,7 @@ }, { "id": 19536, - "title": "Light-MILPopt: Solving Large-scale Mixed Integer Linear Programs with Small-scale Optimizer and Small Training Dataset", + "title": "Light-MILPopt: Solving Large-scale Mixed Integer Linear Programs with Lightweight Optimizer and Small-scale Training Dataset", "authors": [ "Huigen Ye", "Hua Xu", @@ -4777,7 +4777,7 @@ }, { "id": 19534, - "title": "Enable Lanuguage Models to Implicitly Learn Self-Improvement From Data", + "title": "Enabling Lanuguage Models to Implicitly Learn Self-Improvement", "authors": [ "Ziqi Wang", "Le Hou", @@ -4798,7 +4798,7 @@ }, { "id": 19290, - "title": "Bootstrapping Variational Information Pursuit with Foundation Models for Interpretable Image Classification", + "title": "Bootstrapping Variational Information Pursuit with Large Language and Vision Models for Interpretable Image Classification", "authors": [ "Aditya Chattopadhyay", "Kwan Ho Ryan Chan", @@ -4819,7 +4819,7 @@ "authors": [ "Robin Louiset", "Edouard Duchesnay", - "Grigis Antoine", + "Antoine Grigis", "Pietro Gori" ], "abstract": "Contrastive Analysis is a sub-field of Representation Learning that aims at separating 1) salient factors of variation - that only exist in the target dataset (i.e., diseased subjects) in contrast with 2) common factors of variation between target and background (i.e., healthy subjects) datasets. Despite their relevance, current models based on Variational Auto-Encoders have shown poor performance in learning semantically-expressive representations. On the other hand, Contrastive Representation Learning has shown tremendous performance leaps in various applications (classification, clustering, etc.). In this work, we propose to leverage the ability of Contrastive Learning to learn semantically expressive representations when performing Contrastive Analysis. Namely, we reformulate Contrastive Analysis under the lens of the InfoMax Principle and identify two Mutual Information terms to maximize and one to minimize. We decompose the two first terms into an Alignment and a Uniformity term, as commonly done in Contrastive Learning. Then, we motivate a novel Mutual Information minimization strategy to prevent information leakage between common and salient distributions. We validate our method on datasets designed to assess the pattern separation capability in Contrastive Analysis, including MNIST superimposed on CIFAR10, CelebA accessories, dSprites item superimposed on a digit grid, and three medical datasets.", @@ -4838,7 +4838,7 @@ "title": "Matrix Manifold Neural Networks++", "authors": [ "Xuan Son Nguyen", - "Yang", + "Shuo Yang", "Aymeric Histace" ], "abstract": "Deep neural networks (DNNs) on Riemannian manifolds have garnered increasing interest in various applied areas. For instance, DNNs on spherical and hyperbolic manifolds have been designed to solve a wide range of computer vision and nature language processing tasks. One of the key factors that contribute to the success of these networks is that spherical and hyperbolic manifolds have the rich algebraic structures of gyrogroups and gyrovector spaces. This enables principled and effective generalizations of the most successful DNNs to these manifolds. 
Recently, some works have shown that many concepts in the theory of gyrogroups and gyrovector spaces can also be generalized to matrix manifolds such as Symmetric Positive Definite (SPD) and Grassmann manifolds. As a result, some building blocks for SPD and Grassmann neural networks, e.g., isometric models and multinomial logistic regression (MLR) can be derived in a way that is fully analogous to their spherical and hyperbolic counterparts. Building upon these works, in this paper, we design fully-connected (FC) and convolutional layers for SPD neural networks. We also develop MLR on Symmetric Positive Semi-definite (SPSD) manifolds, and propose a method for performing backpropagation with the Grassmann logarithmic map in the projector perspective. We demonstrate the effectiveness of the proposed approach in the human action recognition and node classification tasks.", @@ -4873,7 +4873,7 @@ "authors": [ "Isaac Reid", "Eli Berger", - "Krzysztof Choromanski", + "Krzysztof Marcin Choromanski", "Adrian Weller" ], "abstract": "We present a novel quasi-Monte Carlo mechanism to improve graph-based sampling, coined repelling random walks. By inducing correlations between the trajectories of an interacting ensemble such that their marginal transition probabilities are unmodified, we are able to explore the graph more efficiently, improving the concentration of statistical estimators whilst leaving them unbiased. The mechanism has a trivial drop-in implementation. We showcase the effectiveness of repelling random walks in a range of settings including estimation of graph kernels, the PageRank vector and graphlet concentrations. We provide detailed experimental evaluation and robust theoretical guarantees. To our knowledge, repelling random walks constitute the first rigorously studied quasi-Monte Carlo scheme correlating the directions of walkers on a graph, inviting new research in this exciting nascent domain.", @@ -4893,7 +4893,7 @@ "Shengbang Tong", "Tianjiao Ding", "Xili Dai", - "Benjamin Haeffele", + "Benjamin David Haeffele", "Rene Vidal", "Yi Ma" ], @@ -4910,12 +4910,12 @@ }, { "id": 19530, - "title": "Domain-agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations", + "title": "DDMI: Domain-agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations", "authors": [ "Dogyun Park", "Sihyeon Kim", "Sojin Lee", - "Hyunwoo Kim" + "Hyunwoo J. Kim" ], "abstract": "Recent studies have introduced a new class of generative models for synthesizing implicit neural representations (INRs) that capture arbitrary continuous signals in various domains.These models opened the door for domain-agnostic generative models, but they often fail to achieve high-quality generation.We observed that the existing methods generate the weights of neural networks to parameterize INRs and evaluate the network with fixed positional embeddings (PEs).Arguably, this architecture limits the expressive power of generative models and results in low-quality INR generation.To address this limitation, we propose Domain-agnostic Latent Diffusion Model for INRs (DDMI) that generates adaptive positional embeddings instead of neural networks' weights.Specifically, we develop a Discrete-to-continuous space Variational AutoEncoder (D2C-VAE), which seamlessly connects discrete data and the continuous signal functions in the shared latent space. 
Additionally, we introduce a novel conditioning mechanism for evaluating INRs with the generated hierarchically decomposed basis fields to further enhance expressive power.Extensive experiments across four modalities, \\eg, 2D images, 3D shapes, Neural Radiance Fields, and videos, with seven benchmark datasets, demonstrate the versatility of DDMI and its superior performance compared to the existing INR generative models.", "type": "Poster", @@ -4930,7 +4930,7 @@ "id": 19529, "title": "Conformal Risk Control", "authors": [ - "Anastasios Angelopoulos", + "Anastasios Nikolas Angelopoulos", "Stephen Bates", "Adam Fisch", "Lihua Lei", @@ -4970,7 +4970,7 @@ "authors": [ "Iosif Sakos", "Stefanos Leonardos", - "Stelios Stavroulakis", + "Stelios Andrew Stavroulakis", "William Overman", "Ioannis Panageas", "Georgios Piliouras" @@ -4986,7 +4986,7 @@ }, { "id": 19756, - "title": "MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", + "title": "MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework", "authors": [ "Sirui Hong", "Mingchen Zhuge", @@ -4995,8 +4995,8 @@ "Yuheng Cheng", "Jinlin Wang", "Ceyao Zhang", - "zili wang", - "Steven Yau", + "Zili Wang", + "Steven Ka Shing Yau", "Zijuan Lin", "Liyang Zhou", "Chenyu Ran", @@ -5074,7 +5074,7 @@ "id": 19523, "title": "Self-Supervised Heterogeneous Graph Learning: a Homophily and Heterogeneity View", "authors": [ - "YUJIE MO", + "Yujie Mo", "Feiping Nie", "Ping Hu", "Heng Tao Shen", @@ -5097,7 +5097,7 @@ "authors": [ "Feng Hong", "Jiangchao Yao", - "YUEMING LYU", + "Yueming Lyu", "Zhihan Zhou", "Ivor Tsang", "Ya Zhang", @@ -5172,7 +5172,7 @@ "Samar Khanna", "Gengchen Mai", "Marshall Burke", - "David Lobell", + "David B. Lobell", "Stefano Ermon" ], "abstract": "The application of machine learning (ML) in a range of geospatial tasks is increasingly common but often relies on globally available covariates such as satellite imagery that can either be expensive or lack predictive power.Here we explore the question of whether the vast amounts of knowledge found in Internet language corpora, now compressed within large language models (LLMs), can be leveraged for geospatial prediction tasks. We first demonstrate that LLMs embed remarkable spatial information about locations, but naively querying LLMs using geographic coordinates alone is ineffective in predicting key indicators like population density. 
We then present GeoLLM, a novel method that can effectively extract geospatial knowledge from LLMs with auxiliary map data from OpenStreetMap.We demonstrate the utility of our approach across multiple tasks of central interest to the international community, including the measurement of population density and economic livelihoods.Across these tasks, our method demonstrates a 70\\% improvement in performance (measured using Pearson's $r^2$) relative to baselines that use nearest neighbors or use information directly from the prompt, and performance equal to or exceeding satellite-based benchmarks in the literature.With GeoLLM, we observe that GPT-3.5 outperforms Llama 2 and RoBERTa by 19\\% and 51\\% respectively, suggesting that the performance of our method scales well with the size of the model and its pretraining dataset.Our experiments reveal that LLMs are remarkably sample-efficient, rich in geospatial information, and robust across the globe.Crucially, GeoLLM shows promise in mitigating the limitations of existing geospatial covariates and complementing them well.", @@ -5215,7 +5215,7 @@ "Eric Mitchell", "Rafael Rafailov", "Archit Sharma", - "Christopher Manning", + "Christopher D Manning", "Chelsea Finn", "Stefano Ermon" ], @@ -5233,7 +5233,7 @@ "title": "The Reasonableness Behind Unreasonable Translation Capability of Large Language Model", "authors": [ "Tingchen Fu", - "lemao liu", + "Lemao Liu", "Deng Cai", "Guoping Huang", "Shuming Shi", @@ -5258,7 +5258,7 @@ "Chenlin Meng", "Robin Rombach", "Marshall Burke", - "David Lobell", + "David B. Lobell", "Stefano Ermon" ], "abstract": "Diffusion models have achieved state-of-the-art results on many modalities including images, speech, and video. However, existing models are not tailored to support remote sensing data, which is widely used in important applications including environmental monitoring and crop-yield prediction. Satellite images are significantly different from natural images -- they can be multi-spectral, irregularly sampled across time -- and existing diffusion models trained on images from the Web do not support them. Furthermore, remote sensing data is inherently spatio-temporal, requiring conditional generation tasks not supported by traditional methods based on captions or images. In this paper, we present DiffusionSat, to date the largest generative foundation model trained on a collection of publicly available large, high-resolution remote sensing datasets .As text-based captions are sparsely available for satellite images, we incorporate the associated metadata such as geolocation as conditioning information. Our method produces realistic samples and can be used to solve multiple generative tasks including temporal generation, multi-spectral superrresolution and in-painting. Our method outperforms previous state-of-the-art methods for satellite image generation and is the first large-scale _generative_ foundation model for satellite imagery.The project website can be found here: https://samar-khanna.github.io/DiffusionSat/", @@ -5314,7 +5314,7 @@ "authors": [ "Soobin Um", "Suhyeon Lee", - "Jong Ye" + "Jong Chul Ye" ], "abstract": "We explore the problem of generating minority samples using diffusion models. The minority samples are instances that lie on low-density regions of a data manifold. Generating a sufficient number of such minority instances is important, since they often contain some unique attributes of the data. 
However, the conventional generation process of the diffusion models mostly yields majority samples (that lie on high-density regions of the manifold) due to their high likelihoods, making themselves ineffective and time-consuming for the minority generating task. In this work, we present a novel framework that can make the generation process of the diffusion models focus on the minority samples. We first highlight that Tweedie's denoising formula yields favorable results for majority samples. The observation motivates us to introduce a metric that describes the uniqueness of a given sample. To address the inherent preference of the diffusion models w.r.t. the majority samples, we further develop *minority guidance*, a sampling technique that can guide the generation process toward regions with desired likelihood levels. Experiments on benchmark real datasets demonstrate that our minority guidance can greatly improve the capability of generating high-quality minority samples over existing generative samplers. We showcase that the performance benefit of our framework persists even in demanding real-world scenarios such as medical imaging, further underscoring the practical significance of our work. Code is available at https://github.com/soobin-um/minority-guidance.", "type": "Poster", @@ -5368,7 +5368,7 @@ "authors": [ "Dongjun Kim", "Chieh-Hsin Lai", - "WeiHsiang Liao", + "Wei-Hsiang Liao", "Naoki Murata", "Yuhta Takida", "Toshimitsu Uesaka", @@ -5408,7 +5408,7 @@ }, { "id": 19513, - "title": "Droplets of Good Representations: Grokking as a First Order Phase Transition in Two Layer Networks", + "title": "Grokking as a First Order Phase Transition in Two Layer Networks", "authors": [ "Noa Rubin", "Inbar Seroussi", @@ -5433,9 +5433,9 @@ "Yuhta Takida", "Toshimitsu Uesaka", "Dongjun Kim", - "WeiHsiang Liao", + "Wei-Hsiang Liao", "Yuki Mitsufuji", - "Zico Kolter", + "J Zico Kolter", "Ruslan Salakhutdinov", "Stefano Ermon" ], @@ -5506,8 +5506,8 @@ "authors": [ "Weiyu Liu", "Geng Chen", - "Jiayuan Mao", "Joy Hsu", + "Jiayuan Mao", "Jiajun Wu" ], "abstract": "This paper presents a framework for learning state and action abstractions in sequential decision-making domains. Our framework, planning abstraction from language (PARL), utilizes language-annotated demonstrations to automatically discover a symbolic and abstract action space and induce a latent state abstraction based on it. PARL consists of three stages: 1) recovering object-level and action concepts, 2) learning state abstractions, abstract action feasibility, and transition models, and 3) applying low-level policies for abstract actions. During inference, given the task description, PARL first makes abstract action plans using the latent transition and feasibility functions, then refines the high-level plan using low-level policies. PARL generalizes across scenarios involving novel object instances and environments, unseen concept compositions, and tasks that require longer planning horizons than settings it is trained on.", @@ -5525,7 +5525,7 @@ "authors": [ "Yite Wang", "Jiahao Su", - "Lu", + "Hanlin Lu", "Cong Xie", "Tianyi Liu", "Jianbo Yuan", @@ -5565,9 +5565,9 @@ }, { "id": 19505, - "title": "Evaluating Language Models Through Negotiations", + "title": "Evaluating Language Model Agency Through Negotiations", "authors": [ - "Tim R. 
Davidson", + "Tim Ruben Davidson", "Veniamin Veselovsky", "Michal Kosinski", "Robert West" @@ -5583,7 +5583,7 @@ }, { "id": 19503, - "title": "Step-Back Prompting Enables Reasoning Via Abstraction in Large Language Models", + "title": "Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models", "authors": [ "Huaixiu Steven Zheng", "Swaroop Mishra", @@ -5606,9 +5606,9 @@ "id": 19502, "title": "Identifying Representations for Intervention Extrapolation", "authors": [ - "Sorawit (James) Saengkyongam", + "Sorawit Saengkyongam", "Elan Rosenfeld", - "Pradeep K Ravikumar", + "Pradeep Kumar Ravikumar", "Niklas Pfister", "Jonas Peters" ], @@ -5626,7 +5626,7 @@ "title": "Privately Aligning Language Models with Reinforcement Learning", "authors": [ "Fan Wu", - "Huseyin Inan", + "Huseyin A Inan", "Arturs Backurs", "Varun Chandrasekaran", "Janardhan Kulkarni", @@ -5647,7 +5647,7 @@ "authors": [ "Yifan Jiang", "Hao Tang", - "Jen-Hao Chang", + "Jen-Hao Rick Chang", "Liangchen Song", "Zhangyang Wang", "Liangliang Cao" @@ -5714,7 +5714,7 @@ "Zhigang Tu", "Xin Chen", "Xiaohang Zhan", - "Gang Yu", + "Gang YU", "Ying Shan" ], "abstract": "Previous motion generation methods are limited to the pre-rigged 3D human model, hindering their applications in the animation of various non-rigged characters. In this work, we present TapMo, a Text-driven Animation PIpeline for synthesizing Motion in a broad spectrum of skeleton-free 3D characters. The pivotal innovation in TapMo is its use of shape deformation-aware features as a condition to guide the diffusion model, thereby enabling the generation of mesh-specific motions for various characters. Specifically, TapMo comprises two main components - Mesh Handle Predictor and Shape-aware Diffusion Module. Mesh Handle Predictor predicts the skinning weights and clusters mesh vertices into adaptive handles for deformation control, which eliminates the need for traditional skeletal rigging. Shape-aware Motion Diffusion synthesizes motion with mesh-specific adaptations. This module employs text-guided motions and mesh features extracted during the first stage, preserving the geometric integrity of the animations by accounting for the character's shape and deformation. Trained in a weakly-supervised manner, TapMo can accommodate a multitude of non-human meshes, both with and without associated text motions. We demonstrate the effectiveness and generalizability of TapMo through rigorous qualitative and quantitative experiments. Our results reveal that TapMo consistently outperforms existing auto-animation methods, delivering superior-quality animations for both seen or unseen heterogeneous 3D characters.", @@ -5732,9 +5732,9 @@ "authors": [ "Sherry Yang", "Yilun Du", - "Seyed Ghasemipour", + "Seyed Kamyar Seyed Ghasemipour", "Jonathan Tompson", - "Leslie Kaelbling", + "Leslie Pack Kaelbling", "Dale Schuurmans", "Pieter Abbeel" ], @@ -5786,7 +5786,7 @@ }, { "id": 19490, - "title": "A Study of Generalization in Offline Reinforcement Learning", + "title": "The Generalization Gap in Offline Reinforcement Learning", "authors": [ "Ishita Mediratta", "Qingfei You", @@ -5806,7 +5806,7 @@ "id": 19713, "title": "Protein Discovery with Discrete Walk-Jump Sampling", "authors": [ - "Nathan Frey", + "Nathan C. 
Frey", "Dan Berenberg", "Karina Zadorozhny", "Joseph Kleinhenz", @@ -5835,7 +5835,7 @@ "authors": [ "Hong Liu", "Zhiyuan Li", - "David Hall", + "David Leo Wright Hall", "Percy Liang", "Tengyu Ma" ], @@ -5852,7 +5852,7 @@ "id": 19486, "title": "T-Rep: Representation Learning for Time Series using Time-Embeddings", "authors": [ - "Archibald Fraikin", + "Archibald Felix Fraikin", "Adrien Bennetot", "Stephanie Allassonniere" ], @@ -5907,13 +5907,13 @@ }, { "id": 19483, - "title": "Retrieval-Based Reconstruction For Time-series Contrastive Learning", + "title": "REBAR: Retrieval-Based Reconstruction for Time-series Contrastive Learning", "authors": [ "Maxwell Xu", "Alexander Moreno", "Hui Wei", - "Benjamin M Marlin", - "James Rehg" + "Benjamin Marlin", + "James Matthew Rehg" ], "abstract": "The success of self-supervised contrastive learning hinges on identifying positive data pairs that, when pushed together in embedding space, encode useful information for subsequent downstream tasks. However, in time-series, this is challenging because creating positive pairs via augmentations may break the original semantic meaning. We hypothesize that if we can retrieve information from one subsequence to successfully reconstruct another subsequence, then they should form a positive pair. Harnessing this intuition, we introduce our novel approach: REtrieval-BAsed Reconstruction (REBAR) contrastive learning. First, we utilize a convolutional cross-attention architecture to calculate the REBAR error between two different time-series. Then, through validation experiments, we show that REBAR error is a predictor for mutual class membership, justifying its usage as a positive/negative labeler. Finally, once integrated into a contrastive learning framework, our REBAR method is able to learn an embedding that achieves state-of-the-art performance on downstream tasks across diverse modalities.", "type": "Poster", @@ -5930,10 +5930,10 @@ "authors": [ "Fei Kong", "Jinhao Duan", - "ruipeng ma", + "RuiPeng Ma", "Heng Tao Shen", - "Xiaofeng Zhu", "Xiaoshuang Shi", + "Xiaofeng Zhu", "Kaidi Xu" ], "abstract": "Recently, diffusion models have achieved remarkable success in generating tasks, including image and audio generation. However, like other generative models, diffusion models are prone to privacy issues. In this paper, we propose an efficient query-based membership inference attack (MIA), namely Proximal Initialization Attack (PIA), which utilizes groundtruth trajectory obtained by $\\epsilon$ initialized in $t=0$ and predicted point to infer memberships. Experimental results indicate that the proposed method can achieve competitive performance with only two queries on both discrete-time and continuous-time diffusion models. Moreover, previous works on the privacy of diffusion models have focused on vision tasks without considering audio tasks. Therefore, we also explore the robustness of diffusion models to MIA in the text-to-speech (TTS) task, which is an audio generation task. To the best of our knowledge, this work is the first to study the robustness of diffusion models to MIA in the TTS task. 
Experimental results indicate that models with mel-spectrogram (image-like) output are vulnerable to MIA, while models with audio output are relatively robust to MIA.", @@ -5970,7 +5970,7 @@ "id": 19481, "title": "Neural Rate Control for Learned Video Compression", "authors": [ - "yiwei zhang", + "Yiwei Zhang", "Guo Lu", "Yunuo Chen", "Shen Wang", @@ -5989,12 +5989,12 @@ }, { "id": 19480, - "title": "Understanding Expressivity of Neural KG Reasoning from Rule Structure Learning", + "title": "Understanding Expressivity of GNN in Rule Learning", "authors": [ "Haiquan Qiu", "Yongqi Zhang", "Yong Li", - "Quanming Yao" + "quanming yao" ], "abstract": "Knowledge graph (KG) reasoning refers to the task of deducing new facts from the existing facts in KG, which has been applied in many fields. Recently, Graph Neural Networks (GNNs) with tail entity scoring achieve the state-of-the-art performance on KG reasoning. However, the theoretical understandings for these GNNs are either lacking or focusing on single-relational graphs, leaving what the kind of rule structures these GNNs can learn an open problem. We propose to fill the above gap in this paper. Specifically, GNNs with tail entity scoring are unified into a common framework. Then, we analyze their expressivity by formally describing the rule structures they can learn and theoretically demonstrating their superiority. These results further inspire us to propose a novel labeling strategy to learn more rule structures in KG reasoning. Experimental results are consistent with our theoretical findings and verify the effectiveness of our proposed method.", "type": "Poster", @@ -6010,7 +6010,7 @@ "title": "OPTIMAL ROBUST MEMORIZATION WITH RELU NEURAL NETWORKS", "authors": [ "Lijia Yu", - "XIAOSHAN GAO", + "Xiao-Shan Gao", "Lijun Zhang" ], "abstract": "Memorization with neural networks is to study the expressive power of neural networks to interpolate a finite classification data set, which is closely related to the generalizability of deep learning. However, the important problem of robust memorization has not been thoroughly studied. In this paper, several basic problems about robust memorization are solved. First, we prove that it is NP-hard to compute neural networks with certain simple structures, which are robust memorization. A network hypothesis space is called optimal robust memorization for a data set if it can achieve robust memorization for any budget less than half the separation bound of the data set. Second, we explicitly construct neural networks with O(N n) parameters for optimal robust memorization of any data set with dimension n and size N . We also give a lower bound for the width of networks to achieve optimal robust memorization. 
Finally, we explicitly construct neural networks withO(N n log n) parameters for optimal robust memorization of any binary classification data set by controlling the Lipschitz constant of the network.", @@ -6042,7 +6042,7 @@ }, { "id": 18091, - "title": "Generating Images in Context with Multimodal Large Language Models", + "title": "Kosmos-G: Generating Images in Context with Multimodal Large Language Models", "authors": [ "Xichen Pan", "Li Dong", @@ -6064,8 +6064,8 @@ "id": 19477, "title": "LCOT: Linear Circular Optimal Transport", "authors": [ - "ROCIO DIAZ MARTIN", - "Ivan Medri", + "Rocio P Diaz Martin", + "Ivan Vladimir Medri", "Yikun Bai", "Xinran Liu", "Kangbai Yan", @@ -6146,7 +6146,7 @@ "id": 19473, "title": "The Devil is in the Object Boundary: Towards Annotation-free Instance Segmentation using Foundation Models", "authors": [ - "cheng shi", + "Cheng Shi", "Sibei Yang" ], "abstract": "Foundation models, pre-trained on a large amount of data have demonstrated impressive zero-shot capabilities in various downstream tasks. However, in object detection and instance segmentation, two fundamental computer vision tasks heavily reliant on extensive human annotations, foundation models such as SAM and DINO struggle to achieve satisfactory performance. In this study, we reveal that the devil is in the object boundary, $\\textit{i.e.}$, these foundation models fail to discern boundaries between individual objects. For the first time, we probe that CLIP, which has never accessed any instance-level annotations, can provide a highly beneficial and strong instance-level boundary prior in the clustering results of its particular intermediate layer. Following this surprising observation, we propose $\\textbf{\\textit{Zip}}$ which $\\textbf{Z}$ips up CL$\\textbf{ip}$ and SAM in a novel classification-first-then-discovery pipeline, enabling annotation-free, complex-scene-capable, open-vocabulary object detection and instance segmentation. Our Zip significantly boosts SAM's mask AP on COCO dataset by 12.5\\% and establishes state-of-the-art performance in various settings, including training-free, self-training, and label-efficient finetuning. Furthermore, annotation-free Zip even achieves comparable performance to the best-performing open-vocabulary object detecters using base annotations.", @@ -6160,11 +6160,11 @@ }, { "id": 19472, - "title": "Neurosymbolic Grounding for Compositional Generalization", + "title": "Neurosymbolic Grounding for Compositional World Models", "authors": [ "Atharva Sehgal", "Arya Grayeli", - "Jennifer Sun", + "Jennifer J. Sun", "Swarat Chaudhuri" ], "abstract": "We introduce Cosmos, a framework for object-centric world modeling that is designed for compositional generalization (CG), i.e., high performance on unseen input scenes obtained through the composition of known visual \"atoms.\" The central insight behind Cosmos is the use of a novel form of neurosymbolic grounding. Specifically, the framework introduces two new tools: (i) neurosymbolic scene encodings, which represent each entity in a scene using a real vector computed using a neural encoder, as well as a vector of composable symbols describing attributes of the entity, and (ii) a neurosymbolic attention mechanism that binds these entities to learned rules of interaction. Cosmos is end-to-end differentiable; also, unlike traditional neurosymbolic methods that require representations to be manually mapped to symbols, it computes an entity's symbolic attributes using vision-language foundation models. 
Through an evaluation that considers two different forms of CG on an established blocks-pushing domain, we show that the framework establishes a new state-of-the-art for CG in world modeling.", @@ -6186,7 +6186,7 @@ "Zhengyu Chen", "Tianyi Liu", "Jianbo Yuan", - "Bryan Plummer", + "Bryan A. Plummer", "Zhaoran Wang", "Hongxia Yang" ], @@ -6205,10 +6205,10 @@ "authors": [ "Zhang-Wei Hong", "Idan Shenfeld", - "Johnson (Tsun-Hsuan) Wang", + "Tsun-Hsuan Wang", "Yung-Sung Chuang", "Aldo Pareja", - "James R Glass", + "James R. Glass", "Akash Srivastava", "Pulkit Agrawal" ], @@ -6247,10 +6247,10 @@ }, { "id": 19469, - "title": "Generative Adversarial Policy Network for Modelling Protein Complexes", + "title": "Deep Reinforcement Learning for Modelling Protein Complexes", "authors": [ - "Tao Feng", "Ziqi Gao", + "Tao Feng", "Jiaxuan You", "Chenyi Zi", "Yan Zhou", @@ -6305,9 +6305,9 @@ }, { "id": 19467, - "title": "DAM: A Foundation Model for Forecasting", + "title": "DAM: Towards a Foundation Model for Forecasting", "authors": [ - "Luke Darlow", + "Luke Nicholas Darlow", "Qiwen Deng", "Ahmed Hassan", "Martin Asenov", @@ -6331,7 +6331,7 @@ "authors": [ "Xiangyu Dong", "Xingyi Zhang", - "Sibo WANG" + "Sibo Wang" ], "abstract": "Graph-level anomaly detection has gained significant attention as it finds many applications in various domains, such as cancer diagnosis and enzyme prediction. However, existing methods fail to capture the underlying properties of graph anomalies, resulting in unexplainable framework design and unsatisfying performance. In this paper, we take a step back and re-investigate the spectral differences between anomalous and normal graphs. Our main observation shows a significant disparity in the accumulated spectral energy between these two classes. Moreover, we prove that the accumulated spectral energy of the graph signal can be represented by its Rayleigh Quotient, indicating that the Rayleigh Quotient is a driving factor behind the anomalous properties of graphs. Motivated by this, we propose Rayleigh Quotient Graph Neural Network (RQGNN), the first spectral GNN for graph-level anomaly detection, providing a new perspective on exploring the inherent spectral features of anomalous graphs. Specifically, we introduce a novel framework that consists of two components: the Rayleigh Quotient learning component (RQL) and Chebyshev Wavelet GNN with RQ-pooling (CWGNN-RQ). RQL explicitly captures the Rayleigh Quotient of graphs and CWGNN-RQ implicitly explores the spectral space of graphs. Extensive experiments on 10 real-world datasets show that RQGNN outperforms the best rival by 6.74% in Macro-F1 score and 1.44% in AUC, demonstrating the effectiveness of our framework.", "type": "Poster", @@ -6351,7 +6351,7 @@ "Federico Barbero", "Ameya Velingker", "Amin Saberi", - "Michael Bronstein", + "Michael M. Bronstein", "Francesco Di Giovanni" ], "abstract": "Graph Neural Networks (GNNs) are popular models for machine learning on graphs that typically follow the message-passing paradigm, whereby the feature of a node is updated recursively upon aggregating information over its neighbors. While exchanging messages over the input graph endows GNNs with a strong inductive bias, it can also make GNNs susceptible to \\emph{over-squashing}, thereby preventing them from capturing long-range interactions in the given graph. To rectify this issue, {\\em graph rewiring} techniques have been proposed as a means of improving information flow by altering the graph connectivity. 
In this work, we identify three desiderata for graph-rewiring: (i) reduce over-squashing, (ii) respect the locality of the graph, and (iii) preserve the sparsity of the graph. We highlight fundamental trade-offs that occur between {\\em spatial} and {\\em spectral} rewiring techniques; while the former often satisfy (i) and (ii) but not (iii), the latter generally satisfy (i) and (iii) at the expense of (ii). We propose a novel rewiring framework that satisfies all of (i)--(iii) through a locality-aware sequence of rewiring operations. We then discuss a specific instance of such rewiring framework and validate its effectiveness on several real-world benchmarks, showing that it either matches or significantly outperforms existing rewiring approaches.", @@ -6426,7 +6426,7 @@ "Zitong Wang", "Yihang Yao", "Henry Lam", - "DING ZHAO" + "Ding Zhao" ], "abstract": "Offline reinforcement learning (RL) offers a promising direction for learning policies from pre-collected datasets without requiring further interactions with the environment. However, existing methods struggle to handle out-of-distribution (OOD) extrapolation errors, especially in sparse reward or scarce data settings. In this paper, we propose a novel training algorithm called Conservative Density Estimation (CDE), which addresses this challenge by explicitly imposing constraints on the state-action occupancy stationary distribution. CDE overcomes the limitations of existing approaches, such as the stationary distribution correction method, by addressing the support mismatch issue in marginal importance sampling. Our method achieves state-of-the-art performance on the D4RL benchmark. Notably, CDE consistently outperforms baselines in challenging tasks with sparse rewards or insufficient data, demonstrating the advantages of our approach in addressing the extrapolation error problem in offline RL.", "type": "Poster", @@ -6445,10 +6445,10 @@ "Hailey Schoelkopf", "Keiran Paster", "Marco Dos Santos", - "Stephen McAleer", - "Qiaochu Jiang", + "Stephen Marcus McAleer", + "Albert Q. Jiang", "Jia Deng", - "Stella R Biderman", + "Stella Biderman", "Sean Welleck" ], "abstract": "We present Llemma, a large language model for mathematics. We continue pretraining Code Llama on the Proof-Pile-2, a mixture of scientific papers, web data containing mathematics, and mathematical code, yielding Llemma. On the MATH benchmark Llemma outperforms all known openly released models, as well as the unreleased Minerva model suite on an equi-parameter basis. Moreover, Llemma is capable of tool use and formal theorem proving without any finetuning. We openly release all artifacts, including 7 billion and 34 billion parameter models, the Proof-Pile-2, and code to replicate our experiments.", @@ -6466,7 +6466,7 @@ "id": 19458, "title": "NfgTransformer: Equivariant Representation Learning for Normal-form Games", "authors": [ - "SIQI LIU", + "Siqi Liu", "Luke Marris", "Georgios Piliouras", "Ian Gemp", @@ -6524,7 +6524,7 @@ "authors": [ "Siyu Ren", "Zhiyong Wu", - "Kenny Zhu" + "Kenny Q. Zhu" ], "abstract": "Neural language models are probabilistic models of human text. They are predominantly trained using maximum likelihood estimation (MLE), which is equivalent to minimizing the forward cross-entropy between the empirical data distribution and the model distribution. However, various degeneration phenomena are still widely observed when decoding from the distributions learned by such models. 
We establish that the forward cross-entropy is suboptimal as a distance metric for aligning human and model distribution due to its (1) recall-prioritization (2) negative diversity ignorance and (3) train-test mismatch. In this paper, we propose Earth Mover Distance Optimization (EMO) for auto-regressive language modeling. EMO capitalizes on the inherent properties of earth mover distance to address the aforementioned challenges. Due to the high complexity of direct computation, we further introduce a feasible upper bound for EMO to ease end-to-end training. Upon extensive evaluation of language models trained using EMO and MLE. We find that EMO demonstrates a consistently better language modeling performance than MLE across domains. Moreover, EMO demonstrates noteworthy enhancements in downstream performance with minimal fine-tuning on merely 25,000 sentences. This highlights the tremendous potential of EMO as a lightweight calibration method for enhancing large-scale pre-trained language models.", "type": "Poster", @@ -6558,9 +6558,9 @@ "title": "Scalable Neural Network Kernels", "authors": [ "Arijit Sehanobish", - "Krzysztof Choromanski", + "Krzysztof Marcin Choromanski", "YUNFAN ZHAO", - "Kumar Dubey", + "Kumar Avinava Dubey", "Valerii Likhosherstov" ], "abstract": "We introduce the concept of scalable neural network kernels (SNNKs), the replacements of regular feedforward layers (FFLs), capable of approximating the latter, but with favorable computational properties. SNNKs effectively disentangle the inputs from the parameters of the neural network in the FFL, only to connect them in the final computation via the dot-product kernel. They are also strictly more expressive, as allowing to model complicated relationships beyond the functions of the dot-products of parameter-input vectors. We also introduce the neural network bundling process that applies SNNKs to compactify deep neural network architectures, resulting in additional compression gains. In its extreme version, it leads to the fully bundled network whose optimal parameters can be expressed via explicit formulae for several loss functions (e.g. mean squared error), opening a possibility to bypass backpropagation. As a by-product of our analysis, we introduce the mechanism of the universal random features (or URFs), applied to instantiate several SNNK variants, and interesting on its own in the context of scalable kernel methods. We provide rigorous theoretical analysis of all these concepts as well as an extensive empirical evaluation, ranging from point-wise kernel estimation to Transformers' fine-tuning with novel adapter layers inspired by SNNKs. Our mechanism provides up to 5x reduction in the number of trainable parameters, while maintaining competitive accuracy.", @@ -6598,7 +6598,7 @@ "title": "Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings", "authors": [ "Ilyass Hammouamri", - "Ismail Khalfaoui Hassani", + "Ismail Khalfaoui-Hassani", "Timoth\u00e9e Masquelier" ], "abstract": "Spiking Neural Networks (SNNs) are a promising research direction for building power-efficient information processing systems, especially for temporal tasks such as speech recognition. In SNNs, delays refer to the time needed for one spike to travel from one neuron to another. These delays matter because they influence the spike arrival times, and it is well-known that spiking neurons respond more strongly to coincident input spikes. 
More formally, it has been shown theoretically that plastic delays greatly increase the expressivity in SNNs. Yet, efficient algorithms to learn these delays have been lacking. Here, we propose a new discrete-time algorithm that addresses this issue in deep feedforward SNNs using backpropagation, in an offline manner. To simulate delays between consecutive layers, we use 1D convolutions across time. The kernels contain only a few non-zero weights \u2013 one per synapse \u2013 whose positions correspond to the delays. These positions are learned together with the weights using the recently proposed Dilated Convolution with Learnable Spacings (DCLS). We evaluated our method on three datasets: the Spiking Heidelberg Dataset (SHD), the Spiking Speech Commands (SSC) and its non spiking version Google Speech Commands v0.02 (GSC) benchmarks, which require detecting temporal patterns. We used feedforward SNNs with two or three hidden fully connected layers, and vanilla leaky integrate-and-fire neurons. We showed that fixed random delays help and that learning them helps even more. Furthermore, our method outperformed the state-of-the-art in the three datasets without using recurrent connections and with substantially fewer parameters. Our work demonstrates the potential of delay learning in developing accurate and precise models for temporal data processing. Our code is based on PyTorch / SpikingJelly and available at: https://github.com/Thvnvtos/SNN-delays", @@ -6616,13 +6616,13 @@ "id": 19446, "title": "Learning 3D Particle-based Simulators from RGB-D Videos", "authors": [ - "William Whitney", + "William F Whitney", "Tatiana Lopez-Guevara", "Tobias Pfaff", "Yulia Rubanova", "Thomas Kipf", - "Kimberly Stachenfeld", - "Kelsey Allen" + "Kim Stachenfeld", + "Kelsey R Allen" ], "abstract": "Realistic simulation is critical for applications ranging from robotics to animation. Traditional analytic simulators sometimes struggle to capture sufficiently realistic simulation which can lead to problems including the well known \"sim-to-real\" gap in robotics. Learned simulators have emerged as an alternative for better capturing real-world physical dynamics, but require access to privileged ground truth physics information such as precise object geometry or particle tracks. Here we propose a method for learning simulators directly from observations. Visual Particle Dynamics (VPD) jointly learns a latent particle-based representation of 3D scenes, a neural simulator of the latent particle dynamics, and a renderer that can produce images of the scene from arbitrary views. VPD learns end to end from posed RGB-D videos and does not require access to privileged information. Unlike existing 2D video prediction models, we show that VPD's 3D structure enables scene editing and long-term predictions. 
These results pave the way for downstream applications ranging from video editing to robotic planning.", "type": "Poster", @@ -6637,7 +6637,7 @@ "id": 19444, "title": "Space and time continuous physics simulation from partial observations", "authors": [ - "Steeven Janny", + "Steeven JANNY", "Madiha Nadri", "Julie Digne", "Christian Wolf" @@ -6653,7 +6653,7 @@ }, { "id": 17639, - "title": "Alignment as Reward-Guided Search", + "title": "ARGS: Alignment as Reward-Guided Search", "authors": [ "Maxim Khanov", "Jirayu Burapacheep", @@ -6685,7 +6685,7 @@ "Vladim\u00edr Vondru\u0161", "Theophile Gervet", "Vincent-Pierre Berges", - "John Turner", + "John M Turner", "Oleksandr Maksymets", "Zsolt Kira", "Mrinal Kalakrishnan", @@ -6727,7 +6727,7 @@ "title": "Local Search GFlowNets", "authors": [ "Minsu Kim", - "Yun Taeyoung", + "Taeyoung Yun", "Emmanuel Bengio", "Dinghuai Zhang", "Yoshua Bengio", @@ -6782,10 +6782,10 @@ "Wuyang Chen", "Albert Webson", "Yunxuan Li", - "Vincent Zhao", + "Vincent Y Zhao", "Hongkun Yu", "Kurt Keutzer", - "trevor darrell", + "Trevor Darrell", "Denny Zhou" ], "abstract": "Sparse Mixture-of-Experts (MoE) is a neural architecture design that can be utilized to add learnable parameters to Large Language Models (LLMs) without increasing inference cost. Instruction tuning is a technique for training LLMs to follow instructions. We advocate combining these two approaches, as we find that MoE models benefit more from instruction tuning than dense models. In particular, we conduct empirical studies across three experimental setups: (i) Direct finetuning on individual downstream tasks devoid of instruction tuning; (ii) Instruction tuning followed by in-context few-shot or zero-shot generalization on downstream tasks; and (iii) Instruction tuning supplemented by further finetuning on individual downstream tasks. In the first scenario, MoE models overall underperform dense models of identical computational capacity. This narrative, however, dramatically changes with the introduction of instruction tuning (second and third scenario), used independently or in conjunction with task-specific finetuning. Our most powerful model, FLAN-MOE32B, surpasses the performance of FLAN-PALM62B on four benchmark tasks, while using only a third of the FLOPs. The advancements embodied by FLAN-MOE inspire a reevaluation of the design principles of large-scale, high-performance language models in the framework of task-agnostic learning.", @@ -6946,7 +6946,7 @@ "Yansu HE", "Yuan Yuan", "Yu Liu", - "james zhang", + "James Y. Zhang", "Yujiu Yang", "Hao Wang" ], @@ -6966,7 +6966,7 @@ "Ilia Igashov", "Arne Schneuing", "Marwin Segler", - "Michael Bronstein", + "Michael M. Bronstein", "Bruno Correia" ], "abstract": "Retrosynthesis planning is a fundamental challenge in chemistry which aims at designing multi-step reaction pathways from commercially available starting materials to a target molecule. Each step in multi-step retrosynthesis planning requires accurate prediction of possible precursor molecules given the target molecule and confidence estimates to guide heuristic search algorithms. We model single-step retrosynthesis as a distribution learning problem in a discrete state space. First, we introduce the Markov Bridge Model, a generative framework aimed to approximate the dependency between two intractable discrete distributions accessible via a finite sample of coupled data points. Our framework is based on the concept of a Markov bridge, a Markov process pinned at its endpoints. 
Unlike diffusion-based methods, our Markov Bridge Model does not need a tractable noise distribution as a sampling proxy and directly operates on the input product molecules as samples from the intractable prior distribution. We then address the retrosynthesis planning problem with our novel framework and introduce RetroBridge, a template-free retrosynthesis modeling approach that achieves state-of-the-art results on standard evaluation benchmarks.", @@ -7039,7 +7039,7 @@ "id": 19371, "title": "Designing Skill-Compatible AI: Methodologies and Frameworks in Chess", "authors": [ - "KARIM HAMADE", + "Karim Hamade", "Reid McIlroy-Young", "Siddhartha Sen", "Jon Kleinberg", @@ -7151,7 +7151,7 @@ "Chenyu Wang", "Sharut Gupta", "Caroline Uhler", - "Tommi Jaakkola" + "Tommi S. Jaakkola" ], "abstract": "High-throughput drug screening -- using cell imaging or gene expression measurements as readouts of drug effect -- is a critical tool in biotechnology to assess and understand the relationship between the chemical structure and biological activity of a drug. Since large-scale screens have to be divided into multiple experiments, a key difficulty is dealing with batch effects, which can introduce systematic errors and non-biological associations in the data. We propose InfoCORE, an Information maximization approach for COnfounder REmoval, to effectively deal with batch effects and obtain refined molecular representations. InfoCORE establishes a variational lower bound on the conditional mutual information of the latent representations given a batch identifier. It adaptively reweights samples to equalize their implied batch distribution. Extensive experiments on drug screening data reveal InfoCORE's superior performance in a multitude of tasks including molecular property prediction and molecule-phenotype retrieval. Additionally, we show results for how InfoCORE offers a versatile framework and resolves general distribution shifts and issues of data fairness by minimizing correlation with spurious features or removing sensitive attributes.", "type": "Poster", @@ -7184,7 +7184,7 @@ "id": 19355, "title": "You Only Query Once: An Efficient Label-Only Membership Inference Attack", "authors": [ - "Yutong Wu", + "YUTONG WU", "Han Qiu", "Shangwei Guo", "Jiwei Li", @@ -7219,7 +7219,7 @@ "id": 19353, "title": "Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks", "authors": [ - "Vaidehi Ramesh Patil", + "Vaidehi Patil", "Peter Hase", "Mohit Bansal" ], @@ -7237,10 +7237,10 @@ "title": "SpikePoint: An Efficient Point-based Spiking Neural Network for Event Cameras Action Recognition", "authors": [ "Hongwei Ren", - "Yue ZHOU", - "Haotian FU", - "Yulong Huang", + "Yue Zhou", "Xiaopeng LIN", + "Yulong Huang", + "Haotian FU", "Jie Song", "Bojun Cheng" ], @@ -7263,7 +7263,7 @@ "Tengge Hu", "Huakun Luo", "Lintao Ma", - "james zhang", + "James Y. Zhang", "JUN ZHOU" ], "abstract": "Time series forecasting is widely used in extensive applications, such as traffic planning and weather forecasting. However, real-world time series usually present intricate temporal variations, making forecasting extremely challenging. Going beyond the mainstream paradigms of plain decomposition and multiperiodicity analysis, we analyze temporal variations in a novel view of multiscale-mixing, where time series present distinct patterns in different sampling scales. 
Specifically, the microscopic and the macroscopic information are reflected in fine and coarse scales, respectively, and thereby complex variations are inherently disentangled. Based on this observation, we propose TimeMixer as a fully MLP-based architecture with Past-Decomposable-Mixing (PDM) and Future-Multipredictor-Mixing (FMM) blocks to take full advantage of disentangled multiscale series in both past extraction and future prediction phases. Concretely, PDM applies the decomposition to multiscale series and further mixes the decomposed seasonal and trend components in fine-to-coarse and coarse-to-fine directions separately, which successively aggregates the microscopic seasonal and macroscopic trend information. FMM further ensembles multiple predictors to utilize complementary forecasting capabilities in multiscale observations. Consequently, our proposed TimeMixer is able to achieve consistent state-of-the-art performances in both long-term and short-term forecasting tasks with favorable run-time efficiency.", @@ -7301,7 +7301,7 @@ "Xi Yu", "Sigurd L\u00f8kse", "Robert Jenssen", - "Jose Principe" + "Jose C Principe" ], "abstract": "The information bottleneck (IB) approach is popular to improve the generalization, robustness and explainability of deep neural networks. Essentially, it aims to find a minimum sufficient representation $\\mathbf{t}$ by striking a trade-off between a compression term, which is usually characterized by mutual information $I(\\mathbf{x};\\mathbf{t})$ where $\\mathbf{x}$ refers to the input, and a prediction term usually characterized by $I(y;\\mathbf{t})$ where $y$ is the desired response. Mutual information is for the IB for the most part expressed in terms of the Kullback-Leibler (KL) divergence, which in the regression case corresponds to prediction based on mean squared error (MSE) loss with Gaussian assumption and compression approximated by variational inference. In this paper, we study the IB principle for the regression problem and develop a new way to parameterize the IB with deep neural networks by exploiting favorable properties of the Cauchy-Schwarz (CS) divergence. By doing so, we move away from MSE-based regression and ease estimation by avoiding variational approximations or distributional assumptions. We investigate the improved generalization ability of our proposed CS-IB and demonstrate strong adversarial robustness guarantees. We demonstrate its superior performance on six real-world regression tasks over other popular deep IB approaches. We additionally observe that the solutions discovered by CS-IB always achieve the best trade-off between prediction accuracy and compression ratio in the information plane.", "type": "Poster", @@ -7321,7 +7321,7 @@ "Runlong Zhou", "Qiwen Cui", "Abhishek Gupta", - "Simon Du" + "Simon Shaolei Du" ], "abstract": "Off-policy dynamic programming (DP) techniques such as $Q$-learning have proven to be important in sequential decision-making problems. In the presence of function approximation, however, these techniques often diverge due to the absence of Bellman completeness in the function classes considered, a crucial condition for the success of DP-based methods. In this paper, we show how off-policy learning techniques based on return-conditioned supervised learning (RCSL) are able to circumvent these challenges of Bellman completeness, converging under significantly more relaxed assumptions inherited from supervised learning. 
We prove there exists a natural environment in which if one uses two-layer multilayer perceptron as the function approximator, the layer width needs to grow *linearly* with the state space size to satisfy Bellman completeness while a constant layer width is enough for RCSL. These findings take a step towards explaining the superior empirical performance of RCSL methods compared to DP-based methods in environments with near-optimal datasets. Furthermore, in order to learn from sub-optimal datasets, we propose a simple framework called MBRCSL, granting RCSL methods the ability of dynamic programming to stitch together segments from distinct trajectories. MBRCSL leverages learned dynamics models and forward sampling to accomplish trajectory stitching while avoiding the need for Bellman completeness that plagues all dynamic programming algorithms. We propose both theoretical analysis and experimental evaluation to back these claims, outperforming state-of-the-art model-free and model-based offline RL algorithms across several simulated robotics problems.", "type": "Poster", @@ -7393,7 +7393,7 @@ "Ashish Seth", "Sonal Kumar", "Utkarsh Tyagi", - "Chandra Kiran Evuru", + "Chandra Kiran Reddy Evuru", "Ramaneswaran S", "S Sakshi", "Oriol Nieto", @@ -7418,7 +7418,7 @@ "Lin Gui", "Min Yang", "Yulan He", - "HUI WANG", + "Hui Wang", "Ruifeng Xu" ], "abstract": "The approach of Reinforcement Learning from Human Feedback (RLHF) is widely used for enhancing pre-trained Language Models (LM), enabling them to better align with human preferences. Existing RLHF-based LMs however require complete retraining whenever new queries or feedback are introduced, as human preferences may differ across different domains or topics. LM retraining is often impracticable in most real-world scenarios, due to the substantial time and computational costs involved, as well as data privacy concerns. To address this limitation, we propose Continual Proximal Policy Optimization (CPPO), a novel method that is able to continually align LM with dynamic human preferences. Specifically, CPPO adopts a weighting strategy to decide which samples should be utilized for enhancing policy learning and which should be used for solidifying past experiences. This seeks a good trade-off between policy learning and knowledge retention. Our experimental results show that CPPO outperforms strong Continuous learning (CL) baselines when it comes to consistently aligning with human preferences. 
Furthermore, compared to PPO, CPPO offers more efficient and stable learning in non-continual scenarios.", @@ -7449,7 +7449,7 @@ }, { "id": 19335, - "title": "CLAP: Collaborative Adaptation for Checkerboard Learning", + "title": "CLAP: Collaborative Adaptation for Patchwork Learning", "authors": [ "Sen Cui", "Abudukelimu Wuerkaixi", @@ -7489,7 +7489,7 @@ }, { "id": 17967, - "title": "Towards Universal Multi-Modal Personalization: A Language Model Empowered Generative Paradigm", + "title": "Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond", "authors": [ "Tianxin Wei", "Bowen Jin", @@ -7514,11 +7514,11 @@ }, { "id": 19330, - "title": "Sample Relationship from Learning Dynamics Matters for Generalisation", + "title": "lpNTK: Better Generalisation with Less Data via Sample Interaction During Learning", "authors": [ "Shangmin Guo", - "YI REN", - "Stefano Albrecht", + "Yi Ren", + "Stefano V Albrecht", "Kenny Smith" ], "abstract": "Although much research has been done on proposing new models or loss functions to improve the generalisation of artificial neural networks (ANNs), less attention has been directed to the impact of the training data on generalisation. In this work, we start from approximating the interaction between samples, i.e. how learning one sample would modify the model's prediction on other samples. Through analysing the terms involved in weight updates in supervised learning, we find that labels influence the interaction between samples. Therefore, we propose the labelled pseudo Neural Tangent Kernel (lpNTK) which takes label information into consideration when measuring the interactions between samples. We first prove that lpNTK asymptotically converges to the empirical neural tangent kernel in terms of the Frobenius norm under certain assumptions. Secondly, we illustrate how lpNTK helps to understand learning phenomena identified in previous work, specifically the learning difficulty of samples and forgetting events during learning. Moreover, we also show that using lpNTK to identify and remove poisoning training samples does not hurt the generalisation performance of ANNs.", @@ -7643,7 +7643,7 @@ "Lirui Zhao", "Zhiqian Li", "Kaipeng Zhang", - "Gao Peng", + "Peng Gao", "Yu Qiao", "Ping Luo" ], @@ -7677,16 +7677,16 @@ }, { "id": 19321, - "title": "Prometheus: Inducing Evaluation Capability in Language Models", + "title": "Prometheus: Inducing Fine-Grained Evaluation Capability in Language Models", "authors": [ "Seungone Kim", "Jamin Shin", - "yejin cho", + "Yejin Cho", "Joel Jang", "Shayne Longpre", "Hwaran Lee", "Sangdoo Yun", - "Ryan, S Shin", + "Seongjin Shin", "Sungdong Kim", "James Thorne", "Minjoon Seo" @@ -7765,11 +7765,11 @@ "id": 19316, "title": "Combining Axes Preconditioners through Kronecker Approximation for Deep Learning", "authors": [ - "Venkata Sai Surya Subramanyam Duvvuri", + "Sai Surya Duvvuri", "Fnu Devvrit", "Rohan Anil", "Cho-Jui Hsieh", - "Inderjit Dhillon" + "Inderjit S Dhillon" ], "abstract": "Adaptive regularization based optimization methods such as full-matrix Adagrad which use gradient second-moment information hold significant potential for fast convergence in deep neural network (DNN) training, but are memory intensive and computationally demanding for large neural nets. 
We develop a technique called Combining AxeS PReconditioners (CASPR), which optimizes matrix-shaped DNN parameters by finding different preconditioners for each mode/axis of the parameter and combining them using a Kronecker-sum based approximation. We show tighter convergence guarantees in stochastic optimization compared to a Kronecker product based preconditioner, Shampoo, which arises as a special case of CASPR. Furthermore, our experiments demonstrates that CASPR approximates the gradient second-moment matrix in full-matrix Adagrad more accurately, and shows significant improvement in training and generalization performance compared to existing practical adaptive regularization based methods such as Shampoo and Adam in a variety of tasks including graph neural network on OGBG-molpcba, Transformer on a universal dependencies dataset and auto-regressive large language modeling on C4 dataset.", "type": "Poster", @@ -7784,7 +7784,7 @@ "id": 19315, "title": "DiffEnc: Variational Diffusion with a Learned Encoder", "authors": [ - "Beatrix M. G. Nielsen", + "Beatrix Miranda Ginn Nielsen", "Anders Christensen", "Andrea Dittadi", "Ole Winther" @@ -7802,7 +7802,7 @@ "id": 19314, "title": "One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention", "authors": [ - "Arvind Mahankali", + "Arvind V. Mahankali", "Tatsunori Hashimoto", "Tengyu Ma" ], @@ -7840,7 +7840,7 @@ "title": "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking", "authors": [ "Nikhil Prakash", - "Tamar Shaham", + "Tamar Rott Shaham", "Tal Haklay", "Yonatan Belinkov", "David Bau" @@ -7856,7 +7856,7 @@ }, { "id": 19312, - "title": "Learning Nash equilibria in Rank-1 games: Going beyond the Minty Property", + "title": "Learning Nash Equilibria in Rank-1 Games", "authors": [ "Nikolas Patris", "Ioannis Panageas" @@ -7888,7 +7888,7 @@ }, { "id": 19306, - "title": "The HIM Solution for Legged Locomotion: Minimal Sensors, Efficient Learning, and Substantial Agility", + "title": "Hybrid Internal Model: Learning Agile Legged Locomotion with Simulated Robot Response", "authors": [ "Junfeng Long", "ZiRui Wang", @@ -7949,7 +7949,7 @@ "title": "Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior", "authors": [ "Kai Cui", - "Sascha Hauck", + "Sascha H. Hauck", "Christian Fabian", "Heinz Koeppl" ], @@ -7986,9 +7986,9 @@ "id": 19301, "title": "ED-NeRF: Efficient Text-Guided Editing of 3D Scene With Latent Space NeRF", "authors": [ - "Jangho Park", + "JangHo Park", "Gihyun Kwon", - "Jong Ye" + "Jong Chul Ye" ], "abstract": "Recently, there has been a significant advancement in text-to-image diffusion models, leading to groundbreaking performance in 2D image generation. These advancements have been extended to 3D models, enabling the generation of novel 3D objects from textual descriptions. This has evolved into NeRF editing methods, which allow the manipulation of existing 3D objects through textual conditioning. However, existing NeRF editing techniques have faced limitations in their performance due to slow training speeds and the use of loss functions that do not adequately consider editing. To address this, here we present a novel 3D NeRF editing approach dubbed ED-NeRF by successfully embedding real-world scenes into the latent space of the latent diffusion model (LDM) through a unique refinement layer. 
This approach enables us to obtain a NeRF backbone that is not only faster but also more amenable to editing compared to traditional image space NeRF editing. Furthermore, we propose an improved loss function tailored for editing by migrating the delta denoising score (DDS) distillation loss, originally used in 2D image editing to the three-dimensional domain. This novel loss function surpasses the well-known score distillation sampling (SDS) loss in terms of suitability for editing purposes. Our experimental results demonstrate that ED-NeRF achieves faster editing speed while producing improved output quality compared to state-of-the-art 3D editing models.", "type": "Poster", @@ -8080,14 +8080,14 @@ }, { "id": 19298, - "title": "Do Large Language Models Know about Facts?", + "title": "Towards Understanding Factual Knowledge of Large Language Models", "authors": [ "Xuming Hu", "Junzhe Chen", "Xiaochuan Li", "Yufei Guo", "Lijie Wen", - "Philip Yu", + "Philip S. Yu", "Zhijiang Guo" ], "abstract": "Large language models (LLMs) have recently driven striking performance improvements across a range of natural language processing tasks. The factual knowledge acquired during pretraining and instruction tuning can be useful in various downstream tasks, such as question answering, and language generation. Unlike conventional Knowledge Bases (KBs) that explicitly store factual knowledge, LLMs implicitly store facts in their parameters. Content generated by the LLMs can often exhibit inaccuracies or deviations from the truth, due to facts that can be incorrectly induced or become obsolete over time. To this end, we aim to comprehensively evaluate the extent and scope of factual knowledge within LLMs by designing the benchmark Pinocchio. Pinocchio contains 20K diverse factual questions that span different sources, timelines, domains, regions, and languages. Furthermore, we investigate whether LLMs are able to compose multiple facts, update factual knowledge temporally, reason over multiple pieces of facts, identify subtle factual differences, and resist adversarial examples. Extensive experiments on different sizes and types of LLMs show that existing LLMs still lack factual knowledge and suffer from various spurious correlations. We believe this is a critical bottleneck for realizing trustworthy artificial intelligence. The dataset Pinocchio and our codes will be publicly available.", @@ -8158,9 +8158,9 @@ "id": 19294, "title": "De novo Protein Design Using Geometric Vector Field Networks", "authors": [ - "weian mao", - "Zheng Sun", + "Weian Mao", "Muzhi Zhu", + "Zheng Sun", "Shuaike Shen", "Lin Yuanbo Wu", "Hao Chen", @@ -8262,7 +8262,7 @@ "title": "On the Learnability of Watermarks for Language Models", "authors": [ "Chenchen Gu", - "Xiang Li", + "Xiang Lisa Li", "Percy Liang", "Tatsunori Hashimoto" ], @@ -8328,8 +8328,8 @@ "Pierre Sermanet", "Tianhe Yu", "Pieter Abbeel", - "Joshua B Tenenbaum", - "Leslie Kaelbling", + "Joshua B. 
Tenenbaum", + "Leslie Pack Kaelbling", "Andy Zeng", "Jonathan Tompson" ], @@ -8344,7 +8344,7 @@ }, { "id": 19281, - "title": "Domain-Agnostic Molecular Generation with Self-feedback", + "title": "Domain-Agnostic Molecular Generation with Chemical Feedback", "authors": [ "Yin Fang", "Ningyu Zhang", @@ -8364,7 +8364,7 @@ }, { "id": 19280, - "title": "Entropy is not Enough for Test-time Adaptation: From the Perspective of Disentangled Factors", + "title": "Entropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors", "authors": [ "Jonghyun Lee", "Dahuin Jung", @@ -8448,7 +8448,7 @@ "Changbin Li", "Kangshuo Li", "Yuzhe Ou", - "Lance Kaplan", + "Lance M. Kaplan", "Audun J\u00f8sang", "Jin-Hee Cho", "DONG HYUN JEONG", @@ -8559,11 +8559,11 @@ }, { "id": 19783, - "title": "Generalization in diffusion models arises from geometry-adaptive harmonic representation", + "title": "Generalization in diffusion models arises from geometry-adaptive harmonic representations", "authors": [ "Zahra Kadkhodaie", "Florentin Guth", - "Eero Simoncelli", + "Eero P Simoncelli", "St\u00e9phane Mallat" ], "abstract": "High-quality samples generated with score-based reverse diffusion algorithms provide evidence that deep neural networks (DNN) trained for denoising can learn high-dimensional densities, despite the curse of dimensionality. However, recent reports of memorization of the training set raise the question of whether these networks are learning the ``true'' density of the data. Here, we show that two denoising DNNs trained on non-overlapping subsets of a dataset learn nearly the same score function, and thus the same density, with a surprisingly small number of training images. This strong generalization demonstrates the existence of powerful inductive biases in the DNN architecture and/or training algorithm. We analyze these, demonstrating that the denoiser performs a shrinkage operation in a basis adapted to the underlying image. Examination of these bases reveals oscillating harmonic structures along contours and in homogeneous image regions. We show that trained denoisers are inductively biased towards these geometry-adaptive harmonic representations by demonstrating that they arise even when the network is trained on image classes such as low-dimensional manifolds for which the harmonic basis is suboptimal. Additionally, we show that the denoising performance of the networks is near-optimal when trained on regular image classes for which the optimal basis is known to be geometry-adaptive and harmonic.", @@ -8638,7 +8638,7 @@ "authors": [ "Shikun Feng", "Minghao Li", - "Yinjun JIA", + "Yinjun Jia", "Wei-Ying Ma", "Yanyan Lan" ], @@ -8655,10 +8655,10 @@ "id": 19257, "title": "Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning", "authors": [ - "SHI RUIZHE", + "Ruizhe Shi", "Yuyao Liu", "Yanjie Ze", - "Simon Du", + "Simon Shaolei Du", "Huazhe Xu" ], "abstract": "Offline reinforcement learning (RL) aims to find a near-optimal policy using pre-collected datasets. Given recent advances in Large Language Models (LLMs) and their few-shot learning prowess, this paper introduces $\\textbf{La}$nguage Models for $\\textbf{Mo}$tion Control ($\\textbf{LaMo}$), a general framework based on Decision Transformers to effectively use pre-trained Language Models (LMs) for offline RL. 
Our framework highlights four crucial components: (1) Initializing Decision Transformers with sequentially pre-trained LMs, (2) employing the LoRA fine-tuning method, in contrast to full-weight fine-tuning, to combine the pre-trained knowledge from LMs and in-domain knowledge effectively, (3) using the non-linear MLP transformation instead of linear projections, to generate embeddings, and (4) integrating an auxiliary language prediction loss during fine-tuning to stabilize the LMs and retain their original abilities on languages. Empirical results indicate $\\textbf{LaMo}$ achieves state-of-the-art performance in sparse-reward tasks and closes the gap between value-based offline RL methods and decision transformers in dense-reward tasks. In particular, our method demonstrates superior performance in scenarios with limited data samples.", @@ -8696,8 +8696,8 @@ "authors": [ "Jisu Nam", "Gyuseong Lee", - "Seonwoo Kim", - "In\u00e8s Hyeonsu Kim", + "Sunwoo Kim", + "Hyeonsu Kim", "Hyoungwon Cho", "Seyeon Kim", "Seungryong Kim" @@ -8732,9 +8732,9 @@ "title": "Enhancing Instance-Level Image Classification with Set-Level Labels", "authors": [ "Renyu Zhang", - "Aly Khan", + "Aly A Khan", "Yuxin Chen", - "Robert Grossman" + "Robert L. Grossman" ], "abstract": "Instance-level image classification tasks have traditionally relied on single-instance labels to train models, e.g., few-shot learning and transfer learning. However, set-level coarse-grained labels that capture relationships among instances can provide richer information in real-world scenarios. In this paper, we present a novel approach to enhance instance-level image classification by leveraging set-level labels. We provide a theoretical analysis of the proposed method, including recognition conditions for fast excess risk rate, shedding light on the theoretical foundations of our approach. We conducted experiments on two distinct categories of datasets: natural image datasets and histopathology image datasets. Our experimental results demonstrate the effectiveness of our approach, showcasing improved classification performance compared to traditional single-instance label-based methods. Notably, our algorithm achieves 13\\% improvement in classification accuracy compared to the strongest baseline on the histopathology image classification benchmarks. Importantly, our experimental findings align with the theoretical analysis, reinforcing the robustness and reliability of our proposed method. This work bridges the gap between instance-level and set-level image classification, offering a promising avenue for advancing the capabilities of image classification models with set-level coarse-grained labels.", "type": "Poster", @@ -8754,9 +8754,9 @@ "Antoine Simoulin", "Shuai Yang", "Grey Yang", - "Ryan Rossi", + "Ryan A. Rossi", "Puja Trivedi", - "Nesreen Ahmed" + "Nesreen K. Ahmed" ], "abstract": "Graph neural networks (GNNs) have achieved remarkable success across a wide range of applications, such as recommendation, drug discovery, and question answering. Behind the success of GNNs lies the backpropagation (BP) algorithm, which is the de facto standard for training deep neural networks. However, despite its effectiveness, BP imposes several constraints, which are not only biologically implausible, but also limit the scalability, parallelism, and flexibility in learning neural networks. 
Examples of such constraints include the storage of neural activities computed in the forward pass for use in the subsequent backward pass, and the dependence of parameter updates on non-local signals. To address these limitations, the forward-forward algorithm (FF) was recently proposed as an alternative to BP in the image classification domain, which trains neural networks by performing two forward passes over positive and negative data. Inspired by this advance, we propose ForwardGNN in this work, a new forward learning procedure for GNNs, which avoids the constraints imposed by BP via an effective layer-wise local forward training. ForwardGNN extends the original FF to deal with graph data and GNNs, and makes it possible to operate without generating negative inputs (hence no longer forward-forward). Further, ForwardGNN enables each layer to learn from both the bottom-up and top-down signals without relying on the backpropagation of errors. Extensive experiments involving five real-world datasets and three representative GNNs show the effectiveness and generality of the proposed forward graph learning framework.", "type": "Poster", @@ -8831,7 +8831,7 @@ "id": 19244, "title": "Role of Locality and Weight Sharing in Image-Based Tasks: A Sample Complexity Separation between CNNs, LCNs, and FCNs", "authors": [ - "Aakash Sunil Lahoti", + "Aakash Lahoti", "Stefani Karp", "Ezra Winston", "Aarti Singh", @@ -8923,7 +8923,7 @@ "Yantao Liu", "Amy Xin", "Kaifeng Yun", - "Linlu Gong", + "Linlu GONG", "Nianyi Lin", "Jianhui Chen", "Zhili Wu", @@ -8961,7 +8961,7 @@ "Eric Wallace", "Weijia Shi", "Hannaneh Hajishirzi", - "Noah Smith", + "Noah A. Smith", "Luke Zettlemoyer" ], "abstract": "The legality of training language models (LMs) on copyrighted or otherwise restricted data is under intense debate. However, as we show, model performance significantly degrades if trained only on low-risk text (e.g., out-of-copyright books or government documents), due to its limited size and domain coverage. We present SILO, a new language model that manages this risk-performance tradeoff during inference. SILO is built by (1) training a parametric LM on the Open License Corpus (OLC), a new corpus we curate with 228B tokens of public domain and permissively licensed text and (2) augmenting it with a more general and easily modifiable nonparametric datastore (e.g., containing copyrighted books or news) that is only queried during inference. The datastore allows use of high-risk data without training on it, supports sentence-level data attribution, and enables data producers to opt out from the model by removing content from the store. These capabilities can foster compliance with data-use regulations such as the fair use doctrine in the United States and the GDPR in the European Union. Our experiments show that the parametric LM struggles on its own with domains not covered by OLC. However, access to the datastore greatly improves out of domain performance, closing 90% of the performance gap with an LM trained on the Pile, a more diverse corpus with mostly high-risk text. We also analyze which nonparametric approach works best, where the remaining errors lie, and how performance scales with datastore size. Our results suggest that it is possible to build high quality language models while mitigating legal risk.", @@ -8982,7 +8982,7 @@ "Etai Littwin", "Noam Razin", "Omid Saremi", - "Joshua Susskind", + "Joshua M. 
Susskind", "Samy Bengio", "Preetum Nakkiran" ], @@ -9002,8 +9002,8 @@ "Runtian Zhai", "Bingbin Liu", "Andrej Risteski", - "Zico Kolter", - "Pradeep K Ravikumar" + "J Zico Kolter", + "Pradeep Kumar Ravikumar" ], "abstract": "Data augmentation is critical to the empirical success of modern self-supervised representation learning, such as contrastive learning and masked language modeling.However, a theoretical understanding of the exact role of the augmentation remains limited.Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator, suggesting that learning a linear probe atop such representation can be connected to RKHS regression.Building on this insight, this work delves into a statistical analysis of augmentation-based pretraining.Starting from the isometry property, a geometric characterization of the target function given by the augmentation, we disentangle the effects of the model and the augmentation,and prove two generalization bounds that are free of model complexity.Our first bound works for an arbitrary encoder, and it is the sum of an estimation error bound incurred by fitting a linear probe, and an approximation error bound by RKHS approximation.Our second bound specifically addresses the casewhere the encoder extracts the top-d eigenspace of a finite-sample-based approximation of the underlying RKHS.A key ingredient in our analysis is the *augmentation complexity*,which we use to quantitatively compare different augmentations and analyze their impact on downstream performance.", "type": "Spotlight Poster", @@ -9045,10 +9045,10 @@ "Maria Lomeli", "Chunting Zhou", "Margaret Li", - "Xi Lin", - "Noah Smith", + "Xi Victoria Lin", + "Noah A. Smith", "Luke Zettlemoyer", - "Scott Yih", + "Wen-tau Yih", "Mike Lewis" ], "abstract": "Language models are currently trained to predict tokens given document prefixes, enabling them to zero shot long form generation and prompting-style tasks which can be reduced to document completion. We instead present IN-CONTEXT PRETRAINING, a new approach where language models are trained on a sequence of related documents, thereby explicitly encouraging them to read and reason across document boundaries. Our approach builds on the fact that current pipelines train by concatenating random sets of shorter documents to create longer context windows; this improves efficiency even though the prior documents provide no signal for predicting the next document. Given this fact, we can do IN-CONTEXT PRETRAINING by simply changing the document ordering so that each context contains related documents, and directly applying existing pretraining pipelines. However, this document sorting problem is challenging. There are billions of documents and we would like the sort to maximize contextual similarity for every document without repeating any data. To do this, we introduce approximate algorithms for finding related documents with efficient nearest neighbor search and constructing coherent batches with a graph cover algorithm. 
Our experiments show IN-CONTEXT PRETRAINING offers a scalable and simple approach to significantly enhance LM performance: we see notable improvements in tasks that require more complex contextual reasoning, including in-context learning (+8%), reading comprehension (+15%), faithfulness to previous contexts (+16%), long-context reasoning (+5%), and retrieval augmentation (+9%).", @@ -9083,9 +9083,9 @@ "id": 19232, "title": "Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood", "authors": [ - "yaxuan zhu", + "Yaxuan Zhu", "Jianwen Xie", - "Yingnian Wu", + "Ying Nian Wu", "Ruiqi Gao" ], "abstract": "Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming, and there exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models. To close this gap, inspired by the recent efforts of learning EBMs by maximimizing diffusion recovery likelihood (DRL), we propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs defined on increasingly noisy versons of a dataset, paired with an initializer model for each EBM. At each noise level, the two models are jointly estimated within a cooperative training framework: Samples from the initializer serve as starting points that are refined by a few MCMC sampling steps from the EBM. The EBM is then optimized by maximizing recovery likelihood, while the initializer model is optimized by learning from the difference between the refined samples and the initial samples. In addition, we made several practical designs for EBM training to further improve the sample quality. Combining these advances, we significantly boost the generation performance compared to existing EBM methods on CIFAR-10 and ImageNet 32x32. And we have shown that CDRL has great potential to largely reduce the sampling time. We also demonstrate the effectiveness of our models for several downstream tasks, including classifier-free guided generation, compositional generation, image inpainting and out-of-distribution detection.", @@ -9127,7 +9127,7 @@ "Tong Yu", "Saayan Mitra", "Victor Bursztyn", - "Ryan Rossi", + "Ryan A. Rossi", "Somdeb Sarkhel", "Chao Zhang" ], @@ -9167,7 +9167,7 @@ "id": 19229, "title": "Compressing LLMs: The Truth is Rarely Pure and Never Simple", "authors": [ - "AJAY JAISWAL", + "AJAY KUMAR JAISWAL", "Zhe Gan", "Xianzhi Du", "Bowen Zhang", @@ -9295,18 +9295,18 @@ "id": 19562, "title": "RA-DIT: Retrieval-Augmented Dual Instruction Tuning", "authors": [ - "Xi Lin", + "Xi Victoria Lin", "Xilun Chen", "Mingda Chen", "Weijia Shi", "Maria Lomeli", "Richard James", "Pedro Rodriguez", - "Jacob D Kahn", + "Jacob Kahn", "Gergely Szilvasy", "Mike Lewis", "Luke Zettlemoyer", - "Scott Yih" + "Wen-tau Yih" ], "abstract": "Retrieval-augmented language models (RALMs) improve performance by accessing long-tail and up-to-date knowledge from external data stores, but are challenging to build. Existing approaches require either expensive retrieval-specific modifications to LM pre-training or use post-hoc integration of the data store that leads to suboptimal performance. We introduce Retrieval-Augmented Dual Instruction Tuning (RA-DIT), a lightweight fine-tuning methodology that provides a third option by retrofitting any LLM with retrieval capabilities. 
Our approach operates in two distinct fine-tuning steps: (1) one updates a pre-trained LM to better use retrieved information, while (2) the other updates the retriever to return more relevant results, as preferred by the LM. By fine-tuning over tasks that require both knowledge utilization and contextual awareness, we demonstrate that each stage yields significant performance improvements, and using both leads to additional gains. Our best model, RA-DIT 65B, achieves state-of-the-art performance across a range of knowledge-intensive zero- and few-shot learning benchmarks, significantly outperforming existing in-context RALM approaches by up to +8.9% in 0-shot setting and +1.4% in 5-shot setting on average.", "type": "Poster", @@ -9340,7 +9340,7 @@ }, { "id": 19219, - "title": "RealChat-1M: A Large-Scale Real-World LLM Conversation Dataset", + "title": "LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset", "authors": [ "Lianmin Zheng", "Wei-Lin Chiang", @@ -9352,7 +9352,7 @@ "Zhuohan Li", "Zi Lin", "Eric Xing", - "Joseph E Gonzalez", + "Joseph E. Gonzalez", "Ion Stoica", "Hao Zhang" ], @@ -9468,11 +9468,11 @@ "Kaihang Pan", "Zhiqi Ge", "Minghe Gao", - "Hanwang Zhang", "Wei Ji", "Wenqiao Zhang", "Tat-Seng Chua", "Siliang Tang", + "Hanwang Zhang", "Yueting Zhuang" ], "abstract": "Recent advancements in Multimodal Large Language Models (MLLMs) have been utilizing Visual Prompt Generators (VPGs) to convert visual features into tokens that LLMs can recognize. This is achieved by training the VPGs on millions of image-caption pairs, where the VPG-generated tokens of images are fed into a frozen LLM to generate the corresponding captions. However, this image-captioning based training objective inherently biases the VPG to concentrate solely on the primary visual contents sufficient for caption generation, often neglecting other visual details. This shortcoming results in MLLMs\u2019 underperformance in comprehending demonstrative instructions consisting of multiple, interleaved, and multimodal instructions that demonstrate the required context to complete a task. To address this issue, we introduce a generic and lightweight Visual Prompt Generator Complete module (VPG-C), which can infer and complete the missing details essential for comprehending demonstrative instructions. Further, we propose a synthetic discriminative training strategy to fine-tune VPG-C, eliminating the need for supervised demonstrative instructions. As for evaluation, we build DEMON, a comprehensive benchmark for demonstrative instruction understanding. Synthetically trained with the proposed strategy, VPG-C achieves significantly stronger zero-shot performance across all tasks of DEMON. Further evaluation on the MME and OwlEval benchmarks also demonstrate the superiority of VPG-C. The anonymous project is available at https://anonymous.4open.science/r/Cheetah-45B4.", @@ -9490,10 +9490,10 @@ "id": 19211, "title": "Manifold Diffusion Fields", "authors": [ - "Ahmed Elhag", + "Ahmed A. A. Elhag", "Yuyang Wang", - "Joshua Susskind", - "MIGUEL ANGEL BAUTISTA" + "Joshua M. Susskind", + "Miguel \u00c1ngel Bautista" ], "abstract": "We present Manifold Diffusion Fields (MDF), an approach that unlocks learning of diffusion models of data in general non-euclidean geometries. Leveraging insights from spectral geometry analysis, we define an intrinsic coordinate system on the manifold via the eigen-functions of the Laplace-Beltrami Operator. 
MDF represents functions using an explicit parametrization formed by a set of multiple input-output pairs. Our approach allows to sample continuous functions on manifolds and is invariant with respect to rigid and isometric transformations of the manifold. In addition, we show that MDF generalizes to the case where the training set contains functions on different manifolds. Empirical results on multiple datasets and manifolds including challenging scientific problems like weather prediction or molecular conformation show that MDF can capture distributions of such functions with better diversity and fidelity than previous approaches.", "type": "Poster", @@ -9556,8 +9556,8 @@ "Bo Li", "Xiaowen Jiang", "Mikkel N. Schmidt", - "Tommy Alstr\u00f8m", - "Sebastian Stich" + "Tommy Sonne Alstr\u00f8m", + "Sebastian U Stich" ], "abstract": "Gradient clipping is key mechanism that is essential to differentially private training techniques in Federated learning. Two popular strategies are per-sample clipping, which clips the mini-batch gradient, and per-update clipping, which clips each user's model update. However, there has not been a thorough theoretical analysis of these two clipping methods.In this work, we rigorously analyze the impact of these two clipping techniques on the convergence of a popular federated learning algorithm FedAvg under standard stochastic noise and gradient dissimilarity assumptions. We provide a convergence guarantee given any arbitrary clipping threshold. Specifically, we show that per-sample clipping is guaranteed to converge to the neighborhood of the stationary point, with the size dependent on the stochastic noise, gradient dissimilarity, and clipping threshold. In contrast, the convergence to the stationary point can be guaranteed with a sufficiently small stepsize in per-update clipping at the cost of more communication rounds. We further provide insights into understanding the impact of the improved convergence analysis in the differentially private setting.", "type": "Poster", @@ -9572,8 +9572,8 @@ "id": 19207, "title": "PB-LLM: Partially Binarized Large Language Models", "authors": [ - "Yuzhang Shang", "Zhihang Yuan", + "Yuzhang Shang", "Zhen Dong" ], "abstract": "This paper explores network binarization, a radical form of quantization, compressing model weights to a single bit, specifically for Large Language Models (LLMs) compression. Due to previous binarization methods collapsing LLMs, we propose a novel approach, Partially-Binarized LLM (PB-LLM), which can achieve extreme low-bit quantization while maintaining the linguistic reasoning capacity of quantized LLMs. Specifically, our exploration first uncovers the ineffectiveness of na\u00efve applications of existing binarization algorithms and highlights the imperative role of salient weights in achieving low-bit quantization. Thus, PB-LLM filters a small ratio of salient weights during binarization, allocating them to higher-bit storage, i.e., partially-binarization. PB-LLM is extended to recover the capacities of quantized LMMs, by analyzing from the perspective of post-training quantization (PTQ) and quantization-aware training (QAT). Under PTQ, combining the concepts from GPTQ, we reconstruct the binarized weight matrix guided by the Hessian matrix and successfully recover the reasoning capacity of PB-LLM in low-bit. 
Under QAT, we freeze the salient weights during training, explore the derivation of optimal scaling factors crucial for minimizing the quantization error, and propose a scaling mechanism based on this derived scaling strategy for residual binarized weights. Those explorations and the developed methodologies significantly contribute to rejuvenating the performance of low-bit quantized LLMs and present substantial advancements in the field of network binarization for LLMs.", @@ -9627,7 +9627,7 @@ "id": 19205, "title": "A Fast and Provable Algorithm for Sparse Phase Retrieval", "authors": [ - "Jian-Feng Cai", + "Jian-Feng CAI", "Yu Long", "Ruixue WEN", "Jiaxi Ying" @@ -9650,7 +9650,7 @@ "Alireza Heidari", "Mohammad Hosein Movasaghinia", "Abolfazl Motahari", - "Babak HosseinKhalaj" + "Babak Khalaj" ], "abstract": "We propose a novel framework for incorporating unlabeled data into semi-supervised classification problems, where scenarios involving the minimization of either i) adversarially robust or ii) non-robust loss functions have been considered. Notably, we allow the unlabeled samples to deviate slightly (in total variation sense) from the in-domain distribution. The core idea behind our framework is to combine Distributionally Robust Optimization (DRO) with self-supervised training. As a result, we also leverage efficient polynomial-time algorithms for the training stage. From a theoretical standpoint, we apply our framework on the classification problem of a mixture of two Gaussians in $\\mathbb{R}^d$, where in addition to the $m$ independent and labeled samples from the true distribution, a set of $n$ (usually with $n\\gg m$) out of domain and unlabeled samples are gievn as well. Using only the labeled data, it is known that the generalization error can be bounded by $\\propto\\left(d/m\\right)^{1/2}$. However, using our method on both isotropic and non-isotropic Gaussian mixture models, one can derive a new set of analytically explicit and non-asymptotic bounds which show substantial improvement on the generalization error compared ERM. Our results underscore two significant insights: 1) out-of-domain samples, even when unlabeled, can be harnessed to narrow the generalization gap, provided that the true data distribution adheres to a form of the \"cluster assumption\", and 2) the semi-supervised learning paradigm can be regarded as a special case of our framework when there are no distributional shifts. We validate our claims through experiments conducted on a variety of synthetic and real-world datasets.", "type": "Spotlight Poster", @@ -9687,7 +9687,7 @@ "Zhongwang Zhang", "Yuqing Li", "Tao Luo", - "Zhiqin Xu" + "Zhi-Qin John Xu" ], "abstract": "Dropout is a widely utilized regularization technique in the training of neural networks, nevertheless, its underlying mechanism and impact on achieving good generalization abilities remain to be further understood. In this work, we start by undertaking a rigorous theoretical derivation of the stochastic modified equations, with the primary aim of providing an effective approximation for the discrete iterative process of dropout. Meanwhile, we experimentally verify SDE's ability to approximate dropout under a wider range of settings. Subsequently, we empirically delve into the intricate mechanisms by which dropout facilitates the identification of flatter minima. This exploration is conducted through intuitive approximations, exploiting the structural analogies inherent in the Hessian of loss landscape and the covariance of dropout. 
Our empirical findings substantiate the ubiquitous presence of the Hessian-variance alignment relation throughout the training process of dropout.", "type": "Poster", @@ -9705,7 +9705,7 @@ "Suhyeon Lee", "Won Jun Kim", "Jinho Chang", - "Jong Ye" + "Jong Chul Ye" ], "abstract": "Following the impressive development of LLMs, vision-language alignment in LLMs is actively being researched to enable multimodal reasoning and visual input/output. This direction of research is particularly relevant to medical imaging because accurate medical image analysis and generation consist of a combination of reasoning based on visual features and prior knowledge. Many recent works have focused on training adapter networks that serve as an information bridge between image processing (encoding or generating) networks and LLMs; but presumably, in order to achieve maximum reasoning potential of LLMs on visual information as well, visual and language features should be allowed to interact more freely. This is especially important in the medical domain because understanding and generating medical images such as chest X-rays (CXR) require not only accurate visual and language-based reasoning but also a more intimate mapping between the two modalities. Thus, taking inspiration from previous work on the transformer and VQ-GAN combination for bidirectional image and text generation, we build upon this approach and develop a method for instruction-tuning an LLM pre-trained only on text to gain vision-language capabilities for medical images. Specifically, we leverage a pretrained LLM\u2019s existing question-answering and instruction-following abilities to teach it to understand visual inputs by instructing it to answer questions about image inputs and, symmetrically, output both text and image responses appropriate to a given query by tuning the LLM with diverse tasks that encompass image-based text-generation and text-based image-generation. We show that our LLM-CXR trained in this approach shows better image-text alignment in both CXR understanding and generation tasks while being smaller in size compared to previously developed models that perform a narrower range of tasks.", "type": "Poster", @@ -9720,9 +9720,9 @@ }, { "id": 19197, - "title": "Quantifying Interactions in Semi-supervised Multimodal Learning: Guarantees and Applications", + "title": "Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications", "authors": [ - "Paul Liang", + "Paul Pu Liang", "Chun Kai Ling", "Yun Cheng", "Alexander Obolenskiy", @@ -9769,7 +9769,7 @@ "Varun Chandrasekaran", "Jerry Li", "Mert Yuksekgonul", - "Rahee Peshawaria", + "Rahee Ghosh Peshawaria", "Ranjita Naik", "Besmira Nushi" ], @@ -9828,7 +9828,7 @@ "authors": [ "Song Xia", "Yi Yu", - "Jiang Xudong", + "Xudong Jiang", "Henghui Ding" ], "abstract": "Randomized Smoothing (RS) has been proven a promising method for endowing an arbitrary image classifier with certified robustness. However, the substantial uncertainty inherent in the high-dimensional isotropic Gaussian noise imposes the curse of dimensionality on RS. Specifically, the upper bound of ${\\ell_2}$ certified robustness radius provided by RS exhibits a diminishing trend with the expansion of the input dimension $d$, proportionally decreasing at a rate of $1/\\sqrt{d}$. This paper explores the feasibility of providing ${\\ell_2}$ certified robustness for high-dimensional input through the utilization of dual smoothing in the lower-dimensional space. 
The proposed Dual Randomized Smoothing (DRS) down-samples the input image into two sub-images and smooths the two sub-images in lower dimensions. Theoretically, we prove that DRS guarantees a tight ${\\ell_2}$ certified robustness radius for the original input and reveal that DRS attains a superior upper bound on the ${\\ell_2}$ robustness radius, which decreases proportionally at a rate of $(1/\\sqrt m + 1/\\sqrt n )$ with $m+n=d$. Extensive experiments demonstrate the generalizability and effectiveness of DRS, which exhibits a notable capability to integrate with established methodologies, yielding substantial improvements in both accuracy and ${\\ell_2}$ certified robustness baselines of RS on the CIFAR-10 and ImageNet datasets.", @@ -9864,23 +9864,23 @@ "authors": [ "Cl\u00e9ment Bonnet", "Daniel Luo", - "Donal Byrne", + "Donal John Byrne", "Shikha Surana", + "Sasha Abramowitz", "Paul Duckworth", "Vincent Coyette", - "Laurence Midgley", - "Sasha Abramowitz", + "Laurence Illing Midgley", "Elshadai Tegegn", "Tristan Kalloniatis", "Omayma Mahjoub", "Matthew Macfarlane", - "Andries Smit", + "Andries Petrus Smit", "Nathan Grinsztajn", "Raphael Boige", - "Cemlyn Waters", - "Mohamed Ali Mimouni", - "Ulrich Mbou Sob", - "Ruan de Kock", + "Cemlyn Neil Waters", + "Mohamed Ali Ali Mimouni", + "Ulrich Armel Mbou Sob", + "Ruan John de Kock", "Siddarth Singh", "Daniel Furelos-Blanco", "Victor Le", @@ -10012,11 +10012,11 @@ }, { "id": 19178, - "title": "Channel Vision Transformers: An Image Is Worth C x 16 x 16 Words", + "title": "Channel Vision Transformers: An Image Is Worth 1 x 16 x 16 Words", "authors": [ "Yujia Bao", "Srinivasan Sivanandan", - "THEOFANIS KARALETSOS" + "Theofanis Karaletsos" ], "abstract": "Vision Transformer (ViT) has emerged as a powerful architecture in the realm of modern computer vision. However, its application in certain imaging fields, such as microscopy and satellite imaging, presents unique challenges. In these domains, images often contain multiple channels, each carrying semantically distinct and independent information. Furthermore, the model must demonstrate robustness to sparsity in input channels, as they may not be densely available during training or testing. In this paper, we propose a modification to the ViT architecture that enhances reasoning across the input channels and introduce Hierarchical Channel Sampling (HCS) as an additional regularization technique to ensure robustness when only partial channels are presented during test time. Our proposed model, ChannelViT, constructs patch tokens independently from each input channel and utilizes a learnable channel embedding that is added to the patch tokens, similar to positional embeddings. We evaluate the performance of ChannelViT on ImageNet, JUMP-CP (microscopy cell imaging), and So2Sat (satellite imaging). Our results show that ChannelViT outperforms ViT on classification tasks and generalizes well, even when a subset of input channels is used during testing. Across our experiments, HCS proves to be a powerful regularizer, independent of the architecture employed, suggesting itself as a straightforward technique for robust ViT training. 
Lastly, we find that ChannelViT generalizes effectively even when there is limited access to all channels during training, highlighting its potential for multi-channel imaging under real-world conditions with sparse sensors.", "type": "Poster", @@ -10119,7 +10119,7 @@ }, { "id": 19172, - "title": "Long-range Neural Atom Learning for Molecular Graphs", + "title": "Neural Atoms: Propagating Long-range Interaction in Molecular Graphs through Efficient Communication Channel", "authors": [ "Xuan Li", "Zhanke Zhou", @@ -10141,7 +10141,7 @@ "id": 19489, "title": "Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning", "authors": [ - "Jacob Springer", + "Jacob Mitchell Springer", "Vaishnavh Nagarajan", "Aditi Raghunathan" ], @@ -10160,8 +10160,8 @@ "authors": [ "Xinyao Fan", "Yueying Wu", - "XU", - "Yu-Hao Huang", + "Chang Xu", + "Yuhao Huang", "Weiqing Liu", "Jiang Bian" ], @@ -10221,7 +10221,7 @@ "Rui Zhou", "Running Zhao", "Zhihan JIANG", - "Edith Ngai" + "Edith C. H. Ngai" ], "abstract": "Federated learning (FL) inevitably confronts the challenge of system heterogeneity in practical scenarios. To enhance the capabilities of most model-homogeneous FL methods in handling system heterogeneity, we propose a training scheme that can extend their capabilities to cope with this challenge. In this paper, we commence our study with a detailed exploration of homogeneous and heterogeneous FL settings and discover three key observations: (1) a positive correlation between client performance and layer similarities, (2) higher similarities in the shallow layers in contrast to the deep layers, and (3) the smoother gradients distributions indicate the higher layer similarities. Building upon these observations, we propose InCo Aggregation that leverages internal cross-layer gradients, a mixture of gradients from shallow and deep layers within a server model, to augment the similarity in the deep layers without requiring additional communication between clients. Furthermore, our methods can be tailored to accommodate model-homogeneous FL methods such as FedAvg, FedProx, FedNova, Scaffold, and MOON, to expand their capabilities to handle the system heterogeneity. Copious experimental results validate the effectiveness of InCo Aggregation, spotlighting internal cross-layer gradients as a promising avenue to enhance the performance in heterogeneous FL.", "type": "Poster", @@ -10294,10 +10294,10 @@ "id": 19742, "title": "Topological data analysis on noisy quantum computers", "authors": [ - "Ismail Akhalwaya", + "Ismail Yunus Akhalwaya", "Shashanka Ubaru", - "Kenneth Clarkson", - "Mark Squillante", + "Kenneth L. Clarkson", + "Mark S. Squillante", "Vishnu Jejjala", "Yang-Hui He", "Kugendran Naidoo", @@ -10342,7 +10342,7 @@ "title": "Butterfly Effects of SGD Noise: Error Amplification in Behavior Cloning and Autoregression", "authors": [ "Adam Block", - "Dylan Foster", + "Dylan J Foster", "Akshay Krishnamurthy", "Max Simchowitz", "Cyril Zhang" @@ -10360,9 +10360,9 @@ "id": 19162, "title": "Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants", "authors": [ - "Peter Richtarik", + "Peter Richt\u00e1rik", "Elnur Gasanov", - "Konstantin Burlachenko" + "Konstantin Pavlovich Burlachenko" ], "abstract": "Error feedback (EF) is a highly popular and immensely effective mechanism for fixing convergence issues which arise in distributed training methods (such as distributed GD or SGD) when these are enhanced with greedy communication compression techniques such as Top-k. 
While EF was proposed almost a decade ago (Seide et al, 2014), and despite concentrated effort by the community to advance the theoretical understanding of this mechanism, there is still a lot to explore. In this work we study a modern form of error feedback called EF21 (Richt\u00e1rik et al, 2021) which offers the currently best-known theoretical guarantees, under the weakest assumptions, and also works well in practice. In particular, while the theoretical communication complexity of EF21 depends on the {\\em quadratic mean} of certain smoothness parameters, we improve this dependence to their {\\em arithmetic mean}, which is always smaller, and can be substantially smaller, especially in heterogeneous data regimes. We take the reader on a journey of our discovery process. Starting with the idea of applying EF21 to an equivalent reformulation of the underlying problem which (unfortunately) requires (often impractical) machine cloning, we continue to the discovery of a new {\\em weighted} version of EF21 which can (fortunately) be executed without any cloning, and finally circle back to an improved analysis of the original EF21 method. While this development applies to the simplest form of EF21, our approach naturally extends to more elaborate variants involving stochastic gradients and partial participation. Further, our technique improves the best-known theory of EF21 in the ``rare features'' regime (Richt\u00e1rik et al, 2023). Finally, we validate our theoretical findings with suitable experiments.", "type": "Poster", @@ -10395,13 +10395,13 @@ }, { "id": 19161, - "title": "Translating Labels to Solve Annotation Mismatches Across Object Detection Datasets", + "title": "Transferring Labels to Solve Annotation Mismatches Across Object Detection Datasets", "authors": [ "Yuan-Hong Liao", "David Acuna", "Rafid Mahmood", "James Lucas", - "Viraj Prabhu", + "Viraj Uday Prabhu", "Sanja Fidler" ], "abstract": "In object detection, varying annotation protocols across datasets can result in annotation mismatches, leading to inconsistent class labels and bounding regions. Addressing these mismatches typically involves manually identifying common trends and fixing the corresponding bounding boxes and class labels. To alleviate this laborious process, we introduce the label translation problem in object detection. Here, the goal is to translate bounding boxes from one or more source datasets to match the annotation style of a target dataset. We propose a data-centric approach, Label-Guided Pseudo-Labeling (LGPL), that improves downstream detectors in a manner agnostic to the detector learning algorithms and model architectures. Validating across four object detection scenarios, defined over seven different datasets and three different architectures, we show that translating labels for a target task via LGPL consistently improves the downstream detection in every setting, on average by $1.88$ mAP and $2.65$ AP$^{75}$. Most importantly, we find that when training with multiple labeled datasets, carefully addressing annotation mismatches with LGPL alone can improve downstream object detection better than off-the-shelf domain adaptation techniques that align only image features.", @@ -10417,12 +10417,12 @@ "id": 19159, "title": "Effective pruning of web-scale datasets based on complexity of concept clusters", "authors": [ - "Amro Kamal", + "Amro Kamal Mohamed Abbas", "Evgenia Rusak", "Kushal Tirumala", "Wieland Brendel", "Kamalika Chaudhuri", - "Ari Morcos" + "Ari S. 
Morcos" ], "abstract": "Utilizing massive web-scale datasets has led to unprecedented performance gains in machine learning models, but also imposes outlandish compute requirements for their training. In order to improve training and data efficiency, we here push the limits of pruning large-scale multimodal datasets for training CLIP-style models. Today\u2019s most effective pruning method on ImageNet clusters data samples into separate concepts according to their embedding and prunes away the most proto- typical samples. We scale this approach to LAION and improve it by noting that the pruning rate should be concept-specific and adapted to the complexity of the concept. Using a simple and intuitive complexity measure, we are able to reduce the training cost to a quarter of regular training. More specifically, we are able to outperform the LAION-trained OpenCLIP-ViT-B/32 model on ImageNet zero-shot accuracy by 1.1p.p. while only using 27.7% of the data and training compute. On the DataComp Medium benchmark, we achieve a new state-of-the-art ImageNet zero-shot accuracy and a competitive average zero-shot accuracy on 38 evaluation tasks.", "type": "Poster", @@ -10461,7 +10461,7 @@ "Jeongyeol Kwon", "Dohyun Kwon", "Stephen Wright", - "Robert Nowak" + "Robert D Nowak" ], "abstract": "In this work, we study first-order algorithms for solving Bilevel Optimization (BO) where the objective functions are smooth but possibly nonconvex in both levels and the variables are restricted to closed convex sets. As a first step, we study the landscape of BO through the lens of penalty methods, in which the upper- and lower-level objectives are combined in a weighted sum with penalty parameter $\\sigma > 0$. In particular, we establish a strong connection between the penalty function and the hyper-objective by explicitly characterizing the conditions under which the values and derivatives of the two must be $O(\\sigma)$-close. A by-product of our analysis is the explicit formula for the gradient of hyper-objective when the lower-level problem has multiple solutions under minimal conditions, which could be of independent interest. Next, viewing the penalty formulation as $O(\\sigma)$-approximation of the original BO, we propose first-order algorithms that find an $\\epsilon$-stationary solution by optimizing the penalty formulation with $\\sigma = O(\\epsilon)$. When the perturbed lower-level problem uniformly satisfies the {\\it small-error} proximal error-bound (EB) condition, we propose a first-order algorithm that converges to an $\\epsilon$-stationary point of the penalty function using in total $O(\\epsilon^{-7})$ accesses to first-order stochastic gradient oracles. Under an additional assumption on stochastic oracles, we show that the algorithm can be implemented in a fully {\\it single-loop} manner, {\\it i.e.,} with $O(1)$ samples per iteration, and achieves the improved oracle-complexity of $O(\\epsilon^{-5})$.", "type": "Spotlight Poster", @@ -10514,7 +10514,7 @@ "authors": [ "Nico Daheim", "Thomas M\u00f6llenhoff", - "Edoardo M. 
Ponti", + "Edoardo Ponti", "Iryna Gurevych", "Mohammad Emtiyaz Khan" ], @@ -10534,8 +10534,8 @@ "Mauricio Tec", "Ana Trisovic", "Michelle Audirac", - "Sophie Woodward", - "Jie Hu", + "Sophie Mirabai Woodward", + "Jie Kate Hu", "Naeem Khoshnevis", "Francesca Dominici" ], @@ -10574,7 +10574,7 @@ "YiWen Chen", "Yang Liu", "Jin Wang", - "QING HE", + "Qing He", "Minhao Cheng", "Xiang Ao" ], @@ -10616,7 +10616,7 @@ "authors": [ "Mengkang Hu", "Yao Mu", - "Xinmiao Yu", + "Xinmiao Chelsey Yu", "Mingyu Ding", "Shiguang Wu", "Wenqi Shao", @@ -10717,7 +10717,7 @@ "id": 19145, "title": "A Plug-and-Play Image Registration Network", "authors": [ - "JUNHAO HU", + "Junhao Hu", "Weijie Gan", "Zhixin Sun", "Hongyu An", @@ -10789,14 +10789,14 @@ }, { "id": 19138, - "title": "Graphpulse: Topological representations for temporal graph property prediction", + "title": "GraphPulse: Topological representations for temporal graph property prediction", "authors": [ "Kiarash Shamsi", "Farimah Poursafaei", - "Shenyang(Andy) Huang", - "Tran Gia Bao Ngo", + "Shenyang Huang", + "Bao Tran Gia Ngo", "Baris Coskunuzer", - "Cuneyt Akcora" + "Cuneyt Gurcan Akcora" ], "abstract": "Many real-world networks evolve over time, and predicting the evolution of such networks remains a challenging task. Graph Neural Networks (GNNs) have shown empirical success for learning on static graphs, but they lack the ability to effectively learn from nodes and edges with different timestamps. Consequently, the prediction of future properties in temporal graphs remains a relatively under-explored area.In this paper, we aim to bridge this gap by introducing a principled framework, named GraphPulse. The framework combines two important techniques for the analysis of temporal graphs within a Newtonian framework. First, we employ the Mapper method, a key tool in topological data analysis, to extract essential clustering information from graph nodes. Next, we harness the sequential modeling capabilities of Recurrent Neural Networks (RNNs) for temporal reasoning regarding the graph's evolution. Through extensive experimentation, we demonstrate that our model enhances the ROC-AUC metric by 10.2\\% in comparison to the top-performing state-of-the-art method across various temporal networks. We provide the implementation of GraphPulse at https://anonymous.4open.science/r/Graph_Pulse", "type": "Poster", @@ -10809,7 +10809,7 @@ }, { "id": 19136, - "title": "Improving Out-of-Domain Generalization with Domain Relations", + "title": "Improving Domain Generalization with Domain Relations", "authors": [ "Huaxiu Yao", "Xinyu Yang", @@ -10832,7 +10832,7 @@ "title": "The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing", "authors": [ "Shen Nie", - "Hanzhong Guo", + "Hanzhong Allan Guo", "Cheng Lu", "Yuhao Zhou", "Chenyu Zheng", @@ -10853,7 +10853,7 @@ "authors": [ "Yassine ABBAHADDOU", "Sofiane ENNADIR", - "Johannes Lutzeyer", + "Johannes F. Lutzeyer", "Michalis Vazirgiannis", "Henrik Bostr\u00f6m" ], @@ -10895,7 +10895,7 @@ "Annan Yu", "Arnur Nigmetov", "Dmitriy Morozov", - "Michael W Mahoney", + "Michael W. Mahoney", "N. Benjamin Erichson" ], "abstract": "State-space models (SSMs) have recently emerged as a framework for learning long-range sequence tasks. An example is the structured state-space sequence (S4) layer, which uses the diagonal-plus-low-rank structure of the HiPPO initialization framework. 
However, the complicated structure of the S4 layer poses challenges; and, in an effort to address these challenges, models such as S4D and S5 have considered a purely diagonal structure. This choice simplifies the implementation, improves computational efficiency, and allows channel communication. However, diagonalizing the HiPPO framework is itself an ill-posed problem. In this paper, we propose a general solution for this and related ill-posed diagonalization problems in machine learning. We introduce a generic, backward-stable ``perturb-then-diagonalize'' (PTD) methodology, which is based on the pseudospectral theory of non-normal operators, and which may be interpreted as the approximate diagonalization of the non-normal matrices defining SSMs. Based on this, we introduce the S4-PTD and S5-PTD models. Through theoretical analysis of the transfer functions of different initialization schemes, we demonstrate that the S4-PTD/S5-PTD initialization strongly converges to the HiPPO framework, while the S4D/S5 initialization only achieves weak convergences. As a result, our new models show resilience to Fourier-mode noise-perturbed inputs, a crucial property not achieved by the S4D/S5 models. In addition to improved robustness, our S5-PTD model averages 87.6% accuracy on the Long-Range Arena benchmark, demonstrating that the PTD methodology helps to improve the accuracy of deep learning models.", @@ -10976,7 +10976,7 @@ "Adam Misik", "Yuankai Wu", "Constantin Patsch", - "Fabian Seguel", + "Fabian Esteban Seguel", "Eckehard Steinbach" ], "abstract": "Recently, SO(3)-equivariant methods have been explored for 3D reconstruction via Scan-to-CAD.Despite significant advancements attributed to the unique characteristics of 3D data, existing SO(3)-equivariant approaches often fall short in seamlessly integrating local and global contextual information in a widely generalizable manner.Our contributions in this paper are threefold.First, we introduce Spherical Patch Fields, a representation technique designed for patch-wise, SO(3)-equivariant 3D point clouds, anchored theoretically on the principles of Spherical Gaussians.Second, we present the Patch Gaussian Layer, designed for the adaptive extraction of local and global contextual information from resizable point cloud patches.Culminating our contributions, we present Learnable Spherical Patch Fields (DeepSPF) \u2013 a versatile and easily integrable backbone suitable for instance-based point networks.Through rigorous evaluations, we demonstrate significant enhancements in Scan-to-CAD performance for point cloud registration, retrieval, and completion: a significant reduction in the rotation error of existing registration methods, an improvement of up to 17\\% in the Top-1 error for retrieval tasks, and a notable reduction of up to 30\\% in the Chamfer Distance for completion models, all attributable to the incorporation of DeepSPF.", @@ -11030,7 +11030,7 @@ "Inbal Leibovitch", "Guy Tevet", "Moab Arar", - "Amit Bermano", + "Amit Haim Bermano", "Daniel Cohen-Or" ], "abstract": "Synthesizing realistic animations of humans, animals, and even imaginary creatures, has long been a goal for artists and computer graphics professionals. Compared to the imaging domain, which is rich with large available datasets, the number of data instances for the motion domain is limited, particularly for the animation of animals and exotic creatures (e.g., dragons), which have unique skeletons and motion patterns. 
In this work, we introduce SinMDM, a Single Motion Diffusion Model. It is designed to learn the internal motifs of a single motion sequence with arbitrary topology and synthesize a variety of motions of arbitrary length that remain faithful to the learned motifs. We harness the power of diffusion models and present a denoising network explicitly designed for the task of learning from a single input motion. SinMDM is crafted as a lightweight architecture, which avoids overfitting by using a shallow network with local attention layers that narrow the receptive field and encourage motion diversity. Our work applies to multiple contexts, including spatial and temporal in-betweening, motion expansion, style transfer, and crowd animation. Our results show that SinMDM outperforms existing methods both qualitatively and quantitatively. Moreover, while prior network-based approaches require additional training for different applications, SinMDM supports these applications during inference. Our code is included as supplementary material and will be published.", @@ -11044,12 +11044,12 @@ }, { "id": 19509, - "title": "Fusion is Not Enough: Single Modal Attack on Fusion Models for 3D Object Detection", + "title": "Fusion Is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection", "authors": [ "Zhiyuan Cheng", "Hongjun Choi", "Shiwei Feng", - "James Liang", + "James Chenhao Liang", "Guanhong Tao", "Dongfang Liu", "Michael Zuzak", @@ -11070,7 +11070,7 @@ "authors": [ "Hyungjin Chung", "Suhyeon Lee", - "Jong Ye" + "Jong Chul Ye" ], "abstract": "Krylov subspace, which is generated by multiplying a given vector by the matrix of a linear transformation and its successive powers, has been extensively studied in classical optimization literature to design algorithms that converge quickly for large linear inverse problems. For example, the conjugate gradient method (CG), one of the most popular Krylov subspace methods, is based on the idea of minimizing the residual error in the Krylov subspace. However, with the recent advancement of high-performance diffusion solvers for inverse problems, it is not clear how classical wisdom can be synergistically combined with modern diffusion models. In this study, we propose a novel and efficient diffusion sampling strategy that synergistically combines the diffusion sampling and Krylov subspace methods. Specifically, we prove that if the tangent space at a denoised sample by Tweedie's formula forms a Krylov subspace, then the CG initialized with the denoised data ensures the data consistency update to remain in the tangent space. This negates the need to compute the manifold-constrained gradient (MCG), leading to a more efficient diffusion sampling method. Our method is applicable regardless of the parametrization and setting (i.e., VE, VP). Notably, we achieve state-of-the-art reconstruction quality on challenging real-world medical inverse imaging problems, including multi-coil MRI reconstruction and 3D CT reconstruction. Moreover, our proposed method achieves more than 80 times faster inference time than the previous state-of-the-art method. 
Code is available at https://github.com/HJ-harry/DDS", "type": "Poster", @@ -11087,7 +11087,7 @@ "id": 19120, "title": "Adversarial Imitation Learning via Boosting", "authors": [ - "Jonathan Chang", + "Jonathan Daniel Chang", "Dhruv Sreenivas", "Yingbing Huang", "Kiant\u00e9 Brantley", @@ -11110,7 +11110,7 @@ "Nicholas Monath", "Ahmad Beirami", "Rahul Kidambi", - "Kumar Dubey", + "Kumar Avinava Dubey", "Amr Ahmed", "Snigdha Chaturvedi" ], @@ -11130,7 +11130,7 @@ "Yi Heng Lim", "Qi Zhu", "Joshua Selfridge", - "Muhammad Firmansyah" + "Muhammad Firmansyah Kasim" ], "abstract": "Sequential models, such as Recurrent Neural Networks and Neural Ordinary Differential Equations, have long suffered from slow training due to their inherent sequential nature.For many years this bottleneck has persisted, as many thought sequential models could not be parallelized.We challenge this long-held belief with our parallel algorithm that accelerates GPU evaluation of sequential models by up to 3 orders of magnitude faster without compromising output accuracy.The algorithm does not need any special structure in the sequential models' architecture, making it applicable to a wide range of architectures.Using our method, training sequential models can be more than 10 times faster than the common sequential method without any meaningful difference in the training results.Leveraging this accelerated training, we discovered the efficacy of the Gated Recurrent Unit in a long time series classification problem with 17k time samples.By overcoming the training bottleneck, our work serves as the first step to unlock the potential of non-linear sequential models for long sequence problems.", "type": "Poster", @@ -11292,7 +11292,7 @@ "title": "Prompt Gradient Projection for Continual Learning", "authors": [ "Jingyang Qiao", - "Zhizhong Zhang", + "zhizhong zhang", "Xin Tan", "Chengwei Chen", "Yanyun Qu", @@ -11372,7 +11372,7 @@ "Sichao Li", "Rong Wang", "Quanling Deng", - "Amanda Barnard" + "Amanda S Barnard" ], "abstract": "Interactions among features are central to understanding the behavior of machine learning models. Recent research has made significant strides in detecting and quantifying feature interactions in single predictive models. However, we argue that the feature interactions extracted from a single pre-specified model may not be trustworthy since: *a well-trained predictive model may not preserve the true feature interactions and there exist multiple well-performing predictive models that differ in feature interaction strengths*. Thus, we recommend exploring feature interaction strengths in a model class of approximately equally accurate predictive models. In this work, we introduce the feature interaction score (FIS) in the context of a Rashomon set, representing a collection of models that achieve similar accuracy on a given task. We propose a general and practical algorithm to calculate the FIS in the model class. We demonstrate the properties of the FIS via synthetic data and draw connections to other areas of statistics. Additionally, we introduce a Halo plot for visualizing the feature interaction variance in high-dimensional space and a swarm plot for analyzing FIS in a Rashomon set. Experiments with recidivism prediction and image classification illustrate how feature interactions can vary dramatically in importance for similarly accurate predictive models. 
Our results suggest that the proposed FIS can provide valuable insights into the nature of feature interactions in machine learning models.", "type": "Poster", @@ -11388,7 +11388,7 @@ "title": "Universal Jailbreak Backdoors from Poisoned Human Feedback", "authors": [ "Javier Rando", - "Florian Tramer" + "Florian Tram\u00e8r" ], "abstract": "Reinforcement Learning from Human Feedback (RLHF) is used to align large language models to produce helpful and harmless responses. Yet, these models can be jailbroken by finding adversarial prompts that revert the model to its unaligned behavior. In this paper, we consider a new threat where an attacker poisons the RLHF data to embed a jailbreak trigger into the model as a backdoor. The trigger then acts like a universal sudo command, enabling arbitrary harmful responses without the need to search for an adversarial prompt. Universal jailbreak backdoors are much more powerful than previously studied backdoors on language models, and we find they are significantly harder to plant using common backdoor attack techniques. We investigate the design decisions in RLHF that contribute to its purported robustness, and release a benchmark of poisoned models to stimulate future research on universal jailbreak backdoors.", "type": "Poster", @@ -11420,7 +11420,7 @@ "title": "Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy", "authors": [ "Simon Ging", - "Maria A. Bravo", + "Maria Alejandra Bravo", "Thomas Brox" ], "abstract": "The evaluation of text-generative vision-language models is a challenging yet crucial endeavor. By addressing the limitations of existing Visual Question Answering (VQA) benchmarks and proposing innovative evaluation methodologies, our research seeks to advance our understanding of these models\u2019 capabilities. We propose a novel VQA benchmark based on well-known visual classification datasets which allows a granular evaluation of text-generative vision-language models and their comparison with discriminative vision-language models. To improve assessment of coarse answers on fine-grained classification tasks, we suggest using the semantic hierarchy of the label space to ask automatically generated follow-up questions about the ground-truth category. Finally, we compare traditional NLP and LLM-based metrics for the problem of evaluating model predictions given ground-truth answers. We perform a human evaluation study upon which we base our decision on the final metric. We apply our benchmark to a suite of vision-language models and show a detailed comparison of their abilities on object, action, and attribute classification. Our contributions aim to lay the foundation for more precise and meaningful assessments, facilitating targeted progress in the exciting field of vision-language modeling.", @@ -11506,7 +11506,7 @@ }, { "id": 19095, - "title": "Leveraging Previous Tasks in Optimizing Risk Measures with Gaussian Processes", + "title": "Meta-VBO: Utilizing Prior Tasks in Optimizing Risk Measures with Gaussian Processes", "authors": [ "Quoc Phong Nguyen", "Bryan Kian Hsiang Low", @@ -11548,7 +11548,7 @@ "Jiaming Shan", "Qinhong Zhou", "Yilun Du", - "Joshua B Tenenbaum", + "Joshua B. 
Tenenbaum", "Tianmin Shu", "Chuang Gan" ], @@ -11569,7 +11569,7 @@ "Rafael Rafailov", "Archit Sharma", "Chelsea Finn", - "Christopher Manning" + "Christopher D Manning" ], "abstract": "Widely used language models (LMs) are typically built by scaling up a two-stage training pipeline: a pre-training stage that uses a very large, diverse dataset of text and a fine-tuning (sometimes, 'alignment') stage using more targeted examples of specific behaviors and/or human preferences. While it has been hypothesized that knowledge and skills come from pre-training, and fine-tuning mostly filters this knowledge and skillset, this intuition has not been rigorously tested. In this paper, we test this hypothesis with a novel methodology for scaling these two stages independently, essentially asking, *What would happen if we combined the knowledge learned by a large model during pre-training with the knowledge learned by a small model during fine-tuning (or vice versa)?* Using an RL-based framework derived from recent developments in learning from human preferences, we introduce *emulated fine-tuning (EFT)*, a principled and practical method for sampling from a distribution that approximates the result of pre-training and fine-tuning at different scales. Our experiments with EFT show that scaling up fine-tuning tends to improve helpfulness, while scaling up pre-training tends to improve factuality. Further, we show that EFT enables test-time adjustment of competing behavioral factors like helpfulness and harmlessness without additional training. Finally, we find that a special case of emulated fine-tuning, which we call LM *up-scaling*, avoids resource-intensive fine-tuning of large pre-trained models by ensembling small fine-tuned models with large pre-trained models, essentially 'emulating' the result of fine-tuning the large pre-trained model. Up-scaling consistently improves helpfulness and factuality of widely used pre-trained models like Llama, Llama-2, and Falcon, without additional hyperparameters or training.", "type": "Poster", @@ -11606,7 +11606,7 @@ "id": 19090, "title": "Privileged Sensing Scaffolds Reinforcement Learning", "authors": [ - "Edward Hu", + "Edward S. Hu", "James Springer", "Oleh Rybkin", "Dinesh Jayaraman" @@ -11645,10 +11645,10 @@ "authors": [ "Yi Sui", "Tongzi Wu", - "Jesse Cresswell", + "Jesse C. Cresswell", "Ga Wu", "George Stein", - "Xiao Shi (Gary) Huang", + "Xiao Shi Huang", "Xiaochen Zhang", "Maksims Volkovs" ], @@ -11663,7 +11663,7 @@ }, { "id": 19087, - "title": "Dissecting Neural Network Robustness Proofs", + "title": "Interpreting Robustness Proofs of Deep Neural Networks", "authors": [ "Debangshu Banerjee", "Avaljot Singh", @@ -11723,7 +11723,7 @@ }, { "id": 19084, - "title": "Robotic Task Generalization via Hindsight Trajectory Sketches", + "title": "RT-Trajectory: Robotic Task Generalization via Hindsight Trajectory Sketches", "authors": [ "Jiayuan Gu", "Sean Kirmani", @@ -11758,7 +11758,7 @@ "authors": [ "Robert Huben", "Hoagy Cunningham", - "Logan Smith", + "Logan Riggs Smith", "Aidan Ewart", "Lee Sharkey" ], @@ -11828,8 +11828,8 @@ "title": "LOQA: Learning with Opponent Q-Learning Awareness", "authors": [ "Milad Aghajohari", - "Juan Duque", - "Timotheus Cooijmans", + "Juan Agustin Duque", + "Tim Cooijmans", "Aaron Courville" ], "abstract": "In various real-world scenarios, interactions among agents often resemble the dynamics of general-sum games, where each agent strives to optimize its own utility. 
Despite the ubiquitous relevance of such settings, decentralized machine learning algorithms have struggled to find equilibria that maximize individual utility while preserving social welfare. In this paper we introduce Learning with Opponent Q-Learning Awareness (LOQA) , a novel reinforcement learning algorithm tailored to optimizing an agent's individual utility while fostering cooperation among adversaries in partially competitive environments. LOQA assumes that each agent samples actions proportionally to their action-value function Q. Experimental results demonstrate the effectiveness of LOQA at achieving state-of-the-art performance in benchmark scenarios such as the Iterated Prisoner's Dilemma and the Coin Game. LOQA achieves these outcomes with a significantly reduced computational footprint compared to previous works, making it a promising approach for practical multi-agent applications.", @@ -11849,9 +11849,9 @@ "Yang Liu", "Linxuan Xia", "Yuqi Lin", + "Wenxiao Wang", "Tu Zheng", "Zheng Yang", - "Wenxiao Wang", "Xiaohui Zhong", "Xiaobo Ren", "Xiaofei He" @@ -11869,8 +11869,8 @@ "id": 19075, "title": "Multimarginal Generative Modeling with Stochastic Interpolants", "authors": [ - "Michael Albergo", - "Nicholas Boffi", + "Michael Samuel Albergo", + "Nicholas Matthew Boffi", "Michael Lindsey", "Eric Vanden-Eijnden" ], @@ -11889,7 +11889,7 @@ "authors": [ "Ibrahim Alabdulmohsin", "Xiao Wang", - "Andreas Steiner", + "Andreas Peter Steiner", "Priya Goyal", "Alexander D'Amour", "Xiaohua Zhai" @@ -11945,7 +11945,7 @@ "title": "Gradual Optimization Learning for Conformational Energy Minimization", "authors": [ "Artem Tsypin", - "Leonid A. Ugadiarov", + "Leonid Anatolievich Ugadiarov", "Kuzma Khrabrov", "Alexander Telepov", "Egor Rumiantsev", @@ -11996,7 +11996,7 @@ "authors": [ "Kyle Vedder", "Neehar Peri", - "Nathaniel Chodosh", + "Nathaniel Eliot Chodosh", "Ishan Khatri", "ERIC EATON", "Dinesh Jayaraman", @@ -12042,7 +12042,7 @@ "Qing Li", "Yixin Zhu", "Yitao Liang", - "Yingnian Wu", + "Ying Nian Wu", "Song-Chun Zhu", "Siyuan Huang" ], @@ -12076,7 +12076,7 @@ }, { "id": 19058, - "title": "Alpagasus: Training a Better Alpaca Model with Fewer Data", + "title": "AlpaGasus: Training a Better Alpaca with Fewer Data", "authors": [ "Lichang Chen", "Shiyang Li", @@ -12137,7 +12137,7 @@ }, { "id": 19010, - "title": "A unique M-pattern for micro-expreesion spotting in long videos", + "title": "A unique M-pattern for micro-expression spotting in long videos", "authors": [ "Jinxuan Wang", "Shiting Xu", @@ -12161,7 +12161,7 @@ "Ignavier Ng", "Xiangchen Song", "Yujia Zheng", - "songyao jin", + "Songyao Jin", "Roberto Legaspi", "Peter Spirtes", "Kun Zhang" @@ -12254,8 +12254,8 @@ "title": "SOInter: A Novel Deep Energy-Based Interpretation Method for Explaining Structured Output Models", "authors": [ "S. Fatemeh Seyyedsalehi", - "Mahdieh Baghshah", - "Hamid Rabiee" + "Mahdieh Soleymani Baghshah", + "Hamid R. Rabiee" ], "abstract": "This paper proposes a novel interpretation technique to explain the behavior of structured output models, which simultaneously learn mappings between an input vector and a set of output variables. As a result of the complex relationships between the computational path of output variables in structured models, a feature may impact the output value via another feature. We focus on one of the outputs as the target and try to find the most important features adopted by the structured model to decide on the target in each locality of the input space. 
We consider an arbitrary structured output model available as a black-box and argue that considering correlations among output variables can improve explanation quality. The goal is to train a function as an interpreter for the target output variable over the input space. We introduce an energy-based training process for the interpreter function, which effectively considers the structural information incorporated into the model to be explained. The proposed method's effectiveness is confirmed using various simulated and real data sets.", "type": "Poster", @@ -12268,7 +12268,7 @@ }, { "id": 19047, - "title": "Direct Inversion: Boosting Diffusion-based Editing with 3 Lines of Code", + "title": "PnP Inversion: Boosting Diffusion-based Editing with 3 Lines of Code", "authors": [ "Xuan Ju", "Ailing Zeng", @@ -12339,7 +12339,7 @@ "Xinyi Wang", "Zhibo Jin", "Jason Xue", - "Flora Salim" + "Flora D. Salim" ], "abstract": "Deep Neural Networks (DNNs) have achieved state-of-the-art performance in various application scenarios. However, due to the real-world noise and human-added perturbations, the trustworthiness of DNNs has been a critical concern from the security perspective. Therefore, it is imperative to provide explainability for the decisions made by the non-linear and complex parameterized models. Given the diverse decision boundaries across various models and specific tasks, attribution methods are promising for this goal, yet its performance can be further improved. In this paper, for the first time, we present that the decision boundary exploration approaches of attribution are consistent with the process for transferable adversarial attacks. Utilizing this consistency, we introduce a novel attribution method via model parameter exploration. Furthermore, inspired by the capability of frequency exploration to investigate the model parameters, we provide enhanced explainability for DNN models by manipulating the input features based on frequency information to explore the decision boundaries of different models. The large-scale experiments demonstrate that our \\textbf{A}ttribution method for \\textbf{E}xplanation with model parameter e\\textbf{X}ploration (AttEXplore) outperforms other state-of-the-art interpretability methods. Moreover, by employing other transferable attack techniques, AttEXplore can explore potential variations in attribution outcomes. Our code is available at: https://anonymous.4open.science/r/AMPE-6C32/.", "type": "Poster", @@ -12358,7 +12358,7 @@ "Jaeseung Park", "Minkyu Kim", "Jaewoong Cho", - "Ernest K Ryu", + "Ernest K. Ryu", "Kangwook Lee" ], "abstract": "Classical clustering methods do not provide users with direct control of the clustering results, and the clustering results may not be consistent with the relevant criterion that a user has in mind. In this work, we present a new methodology for performing image clustering based on user-specified criteria in the form of text by leveraging modern Vision-Language Models and Large Language Models. We call our method Image Clustering Conditioned on Text Criteria (IC$|$TC), and it represents a different paradigm of image clustering. IC$|$TC requires a minimal and practical degree of human intervention and grants the user significant control over the clustering results in return. 
Our experiments show that IC$|$TC can effectively cluster images with various criteria, such as human action, physical location, or the person's mood, significantly outperforming baselines.", @@ -12379,7 +12379,7 @@ "Xiaozhuang Song", "Shun Zheng", "He Zhao", - "Dandan Guo", + "Dan dan Guo", "Yi Chang" ], "abstract": "Tabular data have been playing a mostly important role in diverse real-world fields, such as healthcare, engineering, finance, etc.With the recent success of deep learning, many tabular machine learning (ML) methods based on deep networks (e.g., Transformer, ResNet) have achieved competitive performance on tabular benchmarks. However, existing deep tabular ML methods suffer from the representation entanglement and localization, which largely hinders their prediction performance and leads to performance inconsistency on tabular tasks.To overcome these problems, we explore a novel direction of applying prototype learning for tabular ML and propose a prototype-based tabular representation learning framework, PTaRL, for tabular prediction tasks. The core idea of PTaRL is to construct prototype-based projection space (P-Space) and learn the disentangled representation around global data prototypes. Specifically, PTaRL mainly involves two stages: (i) Prototype Generating, that constructs global prototypes as the basis vectors of P-Space for representation, and (ii) Prototype Projecting, that projects the data samples into P-Space and keeps the core global data information via Optimal Transport. Then, to further acquire the disentangled representations, we constrain PTaRL with two strategies: (i) to diversify the coordinates towards global prototypes of different representations within P-Space, we bring up a diversifying constraint for representation calibration; (ii) to avoid prototype entanglement in P-Space, we introduce a matrix orthogonalization constraint to ensure the independence of global prototypes. Finally, we conduct extensive experiments in PTaRL coupled with state-of-the-art deep tabular ML models on various tabular benchmarks and the results have shown our consistent superiority.", @@ -12438,7 +12438,7 @@ "Yongchao Zhou", "Jimmy Ba", "Yann Dubois", - "Chris Maddison", + "Chris J. Maddison", "Tatsunori Hashimoto" ], "abstract": "Recent advances in Language Model (LM) agents and tool use, exemplified by applications like ChatGPT Plugins, enable a rich set of capabilities but also amplify potential risks\u2014such as leaking private data or causing financial losses. Identifying these risks is labor-intensive, necessitating implementing the tools, setting up the environment for each test scenario manually, and finding risky cases. As tools and agents become more complex, the high cost of testing these agents will make it increasingly difficult to find high-stakes, long-tail risks. To address these challenges, we introduce ToolEmu: a framework that uses an LM to emulate tool execution and enables scalable testing of LM agents against a diverse range of tools and scenarios. Alongside the emulator, we develop an LM-based automatic safety evaluator that examines agent failures and quantifies associated risks. We test both the tool emulator and evaluator through human evaluation and find that 68.8% of failures identified with ToolEmu would be valid real-world agent failures. 
Using our curated initial benchmark consisting of 36 high-stakes toolkits and 144 test cases, we provide a quantitative risk analysis of current LM agents and identify numerous failures with potentially severe outcomes. Notably, even the safest LM agent exhibits such failures 23.9% of the time according to our evaluator, underscoring the need to develop safer LM agents for real-world deployment.", @@ -12475,7 +12475,7 @@ "Aditi Tuli", "Shubh Khanna", "Anna Goldie", - "Christopher Manning" + "Christopher D Manning" ], "abstract": "Retrieval-augmented language models can better adapt to changes in world state and incorporate long-tail knowledge. However, most existing methods retrieve only short contiguous chunks from a retrieval corpus, limiting holistic understanding of the overall document context. We introduce the novel approach of recursively embedding, clustering, and summarizing chunks of text, constructing a tree with differing levels of summarization from the bottom up. At inference time, our RAPTOR model retrieves from this tree, integrating information across lengthy documents at different levels of abstraction. Controlled experiments show that retrieval with recursive summaries offers significant improvements over traditional retrieval-augmented LMs on several tasks. On question-answering tasks that involve complex, multi-step reasoning, we show state-of-the-art results; for example, by coupling RAPTOR retrieval with the use of GPT-4, we can improve the best performance on the QuALITY benchmark by 20\\% in absolute accuracy.", "type": "Poster", @@ -12556,7 +12556,7 @@ "authors": [ "Liu Yang", "Kangwook Lee", - "Robert Nowak", + "Robert D Nowak", "Dimitris Papailiopoulos" ], "abstract": "Transformers have demonstrated effectiveness in in-context solving data-fitting problems from various (latent) models, as reported by Garg et al. (2022). However, the absence of an inherent iterative structure in the transformer architecture presents a challenge in emulating the iterative algorithms, which are commonly employed in traditional machine learning methods. To address this, we propose the utilization of looped transformer architecture and its associated training methodology, with the aim of incorporating iterative characteristics into the transformer architectures. Experimental results suggest that the looped transformer achieves performance comparable to the standard transformer in solving various data-fitting problems, while utilizing less than 10% of the parameter count.", @@ -12652,7 +12652,7 @@ "id": 18990, "title": "Diffusion Sampling with Momentum for Mitigating Divergence Artifacts", "authors": [ - "Suttisak Wisadwongsa", + "Suttisak Wizadwongsa", "Worameth Chinchuthakun", "Pramook Khungurn", "Amit Raj", @@ -12669,11 +12669,11 @@ }, { "id": 18989, - "title": "FairVLM: Mitigating Bias In Pre-Trained Vision-Language Models", + "title": "FairerCLIP: Debiasing CLIP's Zero-Shot Predictions using Functions in RKHSs", "authors": [ "Sepehr Dehdashtian", "Lan Wang", - "Vishnu Naresh Boddeti" + "Vishnu Boddeti" ], "abstract": "Large pre-trained vision-language models (VLMs) provide compact and general-purpose representations of text and images that are demonstrably effective across multiple downstream vision and language tasks. However, owing to the nature of their training process, these models have the potential to 1) propagate or amplify societal biases in the training data, and 2) learn to rely on spurious features. 
Thispaper proposes FairVLM, a general approach for making the zero-shot prediction of VLMs more fair and robust to spurious correlations. We formulate the problem of jointly debiasing VLMs\u2019 image and text representations in reproducing kernel Hilbert spaces (RKHSs), which affords multiple benefits: 1) Flexibility: Unlike existing approaches, which are specialized to either learn with or without ground-truth labels, FairVLM is adaptable to learning in both scenarios, 2) Ease of Optimization: FairVLM lends itself to an iterative optimization involving closed-form solvers, which leads to 4\u00d7-10\u00d7 faster training than the existing methods, 3) Sample Efficiency: Under sample-limited conditions, FairVLM significantly outperforms baselines when they fail entirely, and 4) Performance: Empirically, FairVLM achieves appreciable zero-shot accuracy gains on benchmark fairness and spurious correlation datasets over their respective baselines.", "type": "Poster", @@ -12686,7 +12686,7 @@ }, { "id": 18988, - "title": "Object-Centric Semantic Vector Quantization", + "title": "Structured World Modeling via Semantic Vector Quantization", "authors": [ "Yi-Fu Wu", "Minseung Lee", @@ -12706,10 +12706,10 @@ "title": "Set Learning for Accurate and Calibrated Models", "authors": [ "Lukas Muttenthaler", - "Robert A Vandermeulen", + "Robert A. Vandermeulen", "Qiuyi Zhang", "Thomas Unterthiner", - "Klaus R Muller" + "Klaus Robert Muller" ], "abstract": "Model overconfidence and poor calibration are common in machine learning and difficult to account for when applying standard empirical risk minimization. In this work, we propose a novel method to alleviate these problems that we call odd-$k$-out learning (OKO), which minimizes the cross-entropy error for sets rather than for single examples. This naturally allows the model to capture correlations across data examples and achieves both better accuracy and calibration, especially in limited training data and class-imbalanced regimes. Perhaps surprisingly, OKO often yields better calibration even when training with hard labels and dropping any additional calibration parameter tuning, such as temperature scaling. We demonstrate this in extensive experimental analyses and provide a mathematical theory to interpret our findings. We emphasize that OKO is a general framework that can be easily adapted to many settings and a trained model can be applied to single examples at inference time, without significant run-time overhead or architecture changes.", "type": "Poster", @@ -12745,7 +12745,7 @@ "Thong Thanh Nguyen", "Xiaobao Wu", "Xinshuai Dong", - "Cong-Duy Nguyen", + "Cong-Duy T Nguyen", "See-Kiong Ng", "Anh Tuan Luu" ], @@ -12779,7 +12779,7 @@ "title": "Novel Quadratic Constraints for Extending LipSDP beyond Slope-Restricted Activations", "authors": [ "Patricia Pauli", - "Aaron Havens", + "Aaron J Havens", "Alexandre Araujo", "Siddharth Garg", "Farshad Khorrami", @@ -12814,9 +12814,9 @@ "id": 18978, "title": "Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning", "authors": [ - "Johnathan Xie", + "Johnathan Wenjia Xie", "Yoonho Lee", - "Annie Chen", + "Annie S Chen", "Chelsea Finn" ], "abstract": "Self-supervised learning excels in learning representations from large amounts of unlabeled data, demonstrating success across multiple data modalities. 
Yet, extending self-supervised learning to new modalities is non-trivial because the specifics of existing methods are tailored to each domain, such as domain-specific augmentations which reflect the invariances in the target task. While masked modeling is promising as a domain-agnostic framework for self-supervised learning because it does not rely on input augmentations, its mask sampling procedure remains domain-specific. We present Self-guided Masked Autoencoders (SMA), a fully domain-agnostic masked modeling method. SMA trains an attention based model using a masked modeling objective, by learning masks to sample without any domain-specific assumptions. We evaluate SMA on three self-supervised learning benchmarks in protein biology, chemical property prediction, and particle physics. We find SMA is capable of learning representations without domain-specific knowledge and achieves state-of-the-art performance on these three benchmarks.", @@ -12857,7 +12857,7 @@ "Dongxiang Zhang", "Yangjun Wu", "Lilin Xu", - "Yuan Wang", + "Yuan Jessica Wang", "Xiongwei Han", "Xiaojin Fu", "Tao Zhong", @@ -12895,10 +12895,10 @@ }, { "id": 18975, - "title": "Energy-Based Concept Bottleneck Models", + "title": "Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations", "authors": [ "Xinyue Xu", - "Yi QIN", + "Yi Qin", "Lu Mi", "Hao Wang", "Xiaomeng Li" @@ -12923,8 +12923,8 @@ "Osbert Bastani", "Dinesh Jayaraman", "Yuke Zhu", - "Jim Fan", - "anima anandkumar" + "Linxi Fan", + "Anima Anandkumar" ], "abstract": "Large Language Models (LLMs) have excelled as high-level semantic planners for sequential decision-making tasks. However, harnessing them to learn complex low-level manipulation tasks, such as dexterous pen spinning, remains an open problem. We bridge this fundamental gap and present Eureka, a human-level reward design algorithm powered by LLMs. Eureka exploits the remarkable zero-shot generation, code-writing, and in-context improvement capabilities of state-of-the-art LLMs, such as GPT-4, to perform evolutionary optimization over reward code. The resulting rewards can then be used to acquire complex skills via reinforcement learning. Without any task-specific prompting or pre-defined reward templates, Eureka generates reward functions that outperform expert human-engineered rewards. In a diverse suite of 29 open-source RL environments that include 10 distinct robot morphologies, Eureka outperforms human experts on 83% of the tasks, leading to an average normalized improvement of 52%. The generality of Eureka also enables a new gradient-free in-context learning approach to reinforcement learning from human feedback (RLHF), readily incorporating human inputs to improve the quality and the safety of the generated rewards without model updating. 
Finally, using Eureka rewards in a curriculum learning setting, we demonstrate for the first time, a simulated Shadow Hand capable of performing pen spinning tricks, adeptly manipulating a pen in circles at rapid speed.", "type": "Poster", @@ -12937,7 +12937,7 @@ }, { "id": 18969, - "title": "PoMe: Fleet Learning via Policy Merging", + "title": "Robot Fleet Learning via Policy Merging", "authors": [ "Lirui Wang", "Kaiqing Zhang", @@ -12976,7 +12976,7 @@ "authors": [ "Hugo Lebeau", "Mohamed El Amine Seddik", - "Jos\u00e9 Henrique Goulart" + "Jos\u00e9 Henrique De Morais Goulart" ], "abstract": "We study the estimation of a planted signal hidden in a recently introduced nested matrix-tensor model, which is an extension of the classical spiked rank-one tensor model, motivated by multi-view clustering. Prior work has theoretically examined the performance of a tensor-based approach, which relies on finding a best rank-one approximation, a problem known to be computationally hard. A tractable alternative approach consists in computing instead the best rank-one (matrix) approximation of an unfolding of the observed tensor data, but its performance was hitherto unknown. We quantify here the performance gap between these two approaches, in particular by deriving the precise algorithmic threshold of the unfolding approach and demonstrating that it exhibits a BBP-type transition behavior. This work is therefore in line with recent contributions which deepen our understanding of why tensor-based methods surpass matrix-based methods in handling structured tensor data.", "type": "Poster", @@ -13011,7 +13011,7 @@ "authors": [ "Tsung-Wei Ke", "Sangwoo Mo", - "Stella Yu" + "Stella X. Yu" ], "abstract": "Image segmentation and recognition occur simultaneously, with recognition relying on the underlying segmentation to form a continuous visual grouping hierarchy. For example, the same object can be parsed into different part-to-whole structures, resulting in varying recognitions. Despite this, most prior works treated segmentation and recognition as separate tasks. In this paper, we aim to devise a learning framework that involves segmentation in the recognition process, utilizing hierarchical segmentation for recognition, which is learned by recognition. Specifically, we propose CAST, which realizes this concept through designs inspired by vision transformers, enabling concurrent segmentation and recognition with a single model. The core idea of CAST is to employ adaptive segment tokens that group the finest pixels into coarser segments, using the latest embedding to represent the entire image for recognition. Trained solely on image recognition objectives, CAST automatically discovers the hierarchy of segments. Our experiments demonstrate that CAST provides consistent hierarchical segmentation and recognition, which is impossible with state-of-the-art segmentation methods such as SAM. 
Additionally, CAST offers several advantages over the standard ViT, including improved semantic segmentation, computational efficiency, and object-centric attention.", "type": "Spotlight Poster", @@ -13027,7 +13027,7 @@ "title": "Provable Robust Watermarking for AI-Generated Text", "authors": [ "Xuandong Zhao", - "Prabhanjan Ananth", + "Prabhanjan Vijendra Ananth", "Lei Li", "Yu-Xiang Wang" ], @@ -13044,11 +13044,11 @@ }, { "id": 18963, - "title": "Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-Image Generation", + "title": "Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation", "authors": [ "Jaemin Cho", "Yushi Hu", - "Jason Baldridge", + "Jason Michael Baldridge", "Roopal Garg", "Peter Anderson", "Ranjay Krishna", @@ -13075,7 +13075,7 @@ "Vimal Thilak", "Arwen Bradley", "Preetum Nakkiran", - "Joshua Susskind", + "Joshua M. Susskind", "Etai Littwin" ], "abstract": "Pretrained language models are commonly aligned with human preferences and downstream tasks via reinforcement finetuning (RFT), which refers to maximizing a (possibly learned) reward function using policy gradient algorithms. This work identifies a fundamental optimization obstacle in RFT: we prove that the expected gradient for an input vanishes when its reward standard deviation under the model is small, even if the expected reward is far from optimal. Through experiments on an RFT benchmark and controlled environments, as well as a theoretical analysis, we then demonstrate that vanishing gradients due to small reward standard deviation are prevalent and detrimental, leading to extremely slow reward maximization. Lastly, we explore ways to overcome vanishing gradients in RFT. We find the common practice of an initial supervised finetuning (SFT) phase to be the most promising candidate, which sheds light on its importance in an RFT pipeline. Moreover, we show that a relatively small number of SFT optimization steps on as few as 1% of the input samples can suffice, indicating that the initial SFT phase need not be expensive in terms of compute and data labeling efforts. Overall, our results emphasize that being mindful for inputs whose expected gradient vanishes, as measured by the reward standard deviation, is crucial for successful execution of RFT.", @@ -13113,7 +13113,7 @@ "Xinyun Chen", "Swaroop Mishra", "Huaixiu Steven Zheng", - "Adams Yu", + "Adams Wei Yu", "Xinying Song", "Denny Zhou" ], @@ -13201,7 +13201,7 @@ }, { "id": 18951, - "title": "Bellman Optimal Step-size Straightening of Flow-Matching Models", + "title": "Bellman Optimal Stepsize Straightening of Flow-Matching Models", "authors": [ "Bao Nguyen", "Binh Nguyen", @@ -13271,8 +13271,8 @@ "id": 18948, "title": "Toward Optimal Policy Population Growth in Two-Player Zero-Sum Games", "authors": [ - "Stephen McAleer", - "John Banister Lanier", + "Stephen Marcus McAleer", + "JB Lanier", "Kevin A. Wang", "Pierre Baldi", "Tuomas Sandholm", @@ -13313,7 +13313,7 @@ "id": 18946, "title": "Rethinking Label Poisoning for GNNs: Pitfalls and Attacks", "authors": [ - "Vijay Chandra Lingam", + "Vijay Lingam", "Mohammad Sadegh Akhondzadeh", "Aleksandar Bojchevski" ], @@ -13441,14 +13441,14 @@ }, { "id": 18938, - "title": "The Update Equivalence Framework for Decision-Time Planning", + "title": "The Update-Equivalence Framework for Decision-Time Planning", "authors": [ "Samuel Sokota", "Gabriele Farina", - "David Wu", + "David J Wu", "Hengyuan Hu", "Kevin A. 
Wang", - "Zico Kolter", + "J Zico Kolter", "Noam Brown" ], "abstract": "The process of revising (or constructing) a policy immediately prior to execution---known as decision-time planning---is key to achieving superhuman performance in perfect-information games like chess and Go. A recent line of work has extended decision-time planning to more general imperfect-information games, leading to superhuman performance in poker. However, these methods require considering subgames whose sizes grow quickly in the amount of non-public information, making them unhelpful when the amount of non-public information is large. Motivated by this issue, we introduce an alternative framework for decision-time planning that is not based on subgames but rather on the notion of update equivalence. In this framework, decision-time planning algorithms are designed to replicate, in the limit, updates of global policy learners. Despite its conceptual simplicity, this approach had surprisingly been overlooked in the imperfect-information game literature. It enables us to introduce a new family of principled decision-time planning algorithms that do not rely on public information, opening the door to sound and effective decision-time planning in games with large amounts of non-public information. In experiments, members of this family produce comparable or superior results compared to state-of-the-art approaches in Hanabi and improve performance in 3x3 Abrupt Dark Hex and Phantom Tic-Tac-Toe.", @@ -13462,7 +13462,7 @@ }, { "id": 18937, - "title": "Towards Codable Text Watermarking for Large Language Models", + "title": "Towards Codable Watermarking for Injecting Multi-Bits Information to LLMs", "authors": [ "Lean Wang", "Wenkai Yang", @@ -13488,7 +13488,7 @@ "authors": [ "Heejun Lee", "Jina Kim", - "Jeff Willette", + "Jeffrey Willette", "Sung Ju Hwang" ], "abstract": "The transformer architecture has made breakthroughs in recent years on tasks which require modeling pairwise relationships between sequential elements, as is the case in natural language understanding. However, transformers struggle with long sequences due to the quadratic complexity of the attention operation, and previous research has aimed to lower the complexity by sparsifying or linearly approximating the attention matrix. Yet, these approaches cannot straightforwardly distill knowledge from a teacher's attention matrix, and often require complete retraining from scratch. Furthermore, previous sparse and linear approaches may also lose interpretability if they do not produce full quadratic attention matrices. To address these challenges, we propose SEA: Sparse linear attention with an Estimated Attention mask. SEA estimates the attention matrix with linear complexity via kernel-based linear attention, then creates a sparse approximation to the full attention matrix with a top-k selection to perform a sparse attention operation. For language modeling tasks (Wikitext2), previous linear and sparse attention methods show a roughly two-fold worse perplexity scores over the quadratic OPT-125M baseline, while SEA achieves an even better perplexity than OPT-125M, using roughly half as much memory as OPT-125M. Moreover, SEA maintains an interpretable attention matrix and can utilize knowledge distillation to lower the complexity of existing pretrained transformers. 
We believe that our work will have a large practical impact, as it opens the possibility of running large transformers on resource-limited devices with less memory.", @@ -13524,7 +13524,7 @@ "Anant Dadu", "Nicholas Tustison", "Brian Avants", - "Michael Nalls", + "Mike Nalls", "Jimeng Sun", "Faraz Faghri" ], @@ -13562,7 +13562,7 @@ }, { "id": 18931, - "title": "Skill-Mix: a Flexible and Expandable Family of Evaluations for AI Models", + "title": "SKILL-MIX: a Flexible and Expandable Family of Evaluations for AI Models", "authors": [ "Dingli Yu", "Simran Kaur", @@ -13659,7 +13659,7 @@ }, { "id": 18922, - "title": "Improving Natural Language Understanding with Computation-Efficient Retrieval Augmentation", + "title": "ReFusion: Improving Natural Language Understanding with Computation-Efficient Retrieval Representation Fusion", "authors": [ "Shangyu Wu", "Ying Xiong", @@ -13779,7 +13779,7 @@ "Albin Madappally Jose", "Amit Jain", "Ludwig Schmidt", - "Alexander Toshev", + "Alexander T Toshev", "Vaishaal Shankar" ], "abstract": "Large training sets have become a cornerstone of machine learning and are the foundation for recent advances in language modeling and multimodal learning. While data curation for pre-training is often still ad-hoc, one common paradigm is to first collect a massive pool of data from the Web and then filter this candidate pool down to an actual training set via various heuristics. In this work, we study the problem of learning a *data filtering network* (DFN) for this second step of filtering a large uncurated dataset. Our key finding is that the quality of a network for filtering is distinct from its performance on downstream tasks: for instance, a model that performs well on ImageNet can yield worse training sets than a model with low ImageNet accuracy that is trained on a small amount of high-quality data. Based on our insights, we construct new data filtering networks that induce state-of-the-art image-text datasets. Specifically, our best performing dataset DFN-5B enables us to train state-of-the-art models for their compute budgets: among other improvements on a variety of tasks, a ViT-H trained on our dataset achieves 83.0% zero-shot transfer accuracy on ImageNet, out-performing larger models trained on other datasets such as LAION-2B, DataComp-1B, or OpenAI\u2019s WIT. 
In order to facilitate further research in dataset design, we also release a new 2 billion example dataset DFN-2B and show that high performance data filtering networks can be trained from scratch using only publicly available data.", @@ -13827,7 +13827,7 @@ "id": 18910, "title": "At Which Training Stage Does Code Data Help LLMs Reasoning?", "authors": [ - "ma yingwei", + "YINGWEI MA", "Yue Liu", "Yue Yu", "Yuanliang Zhang", @@ -13873,12 +13873,12 @@ "Zhiwei Liu", "Yihao Feng", "Le Xue", - "Rithesh Murthy", + "Rithesh R N", "Zeyuan Chen", "Jianguo Zhang", "Devansh Arpit", "Ran Xu", - "Phil Mui", + "Phil L Mui", "Huan Wang", "Caiming Xiong", "Silvio Savarese" @@ -13951,7 +13951,7 @@ }, { "id": 18902, - "title": "$\\textbf{\\textit{M}}^\\textbf{\\textit{3}}$: Towards Robust Multi-Modal Reasoning via Model Selection", + "title": "Towards Robust Multi-Modal Reasoning via Model Selection", "authors": [ "Xiangyan Liu", "Rongxue LI", @@ -14013,7 +14013,7 @@ "authors": [ "Blake Bordelon", "Lorenzo Noci", - "Mufan Li", + "Mufan Bill Li", "Boris Hanin", "Cengiz Pehlevan" ], @@ -14072,7 +14072,7 @@ "Violetta Shevchenko", "Gil Avraham", "Hisham Husain", - "Anton Hengel" + "Anton van den Hengel" ], "abstract": "Synthesizing novel views for dynamic scenes from a collection of RGB inputs poses significant challenges due to the inherent under-constrained nature of the problem. To mitigate this ill-posedness, practitioners in the field of neural radiance fields (NeRF) often resort to the adoption of intricate geometric regularization techniques, including scene flow, depth estimation, or learned perceptual similarity. While these geometric cues have demonstrated their effectiveness, their incorporation leads to evaluation of computationally expensive off-the-shelf models, introducing substantial computational overhead into the pipeline. Moreover, seamlessly integrating such modules into diverse dynamic NeRF models can be a non-trivial task, hindering their utilization in an architecture-agnostic manner. In this paper, we propose a theoretically grounded, lightweight regularizer by treating the dynamics of a time-varying scene as a low-frequency change of a probability distribution of the light intensity. We constrain the dynamics of this distribution using optimal transport (OT) and provide error bounds under reasonable assumptions. Our regularization is learning-free, architecture agnostic, and can be implemented with just a few lines of code. Finally, we demonstrate the practical efficacy of our regularizer across state-of-the-art architectures.", "type": "Poster", @@ -14108,8 +14108,8 @@ "Yue Hu", "Yiqi Zhong", "Dequan Wang", - "Siheng Chen", - "Yanfeng Wang" + "Yanfeng Wang", + "Siheng Chen" ], "abstract": "Collaborative perception aims to mitigate the limitations of single-agent perception, such as occlusions, by facilitating data exchange among multiple agents. However, most current works consider a homogeneous scenario where all agents use identity sensors and perception models. In reality, heterogeneous agent types may continually emerge and inevitably face a domain gap when collaborating with existing agents. In this paper, we introduce a new open heterogeneous problem: how to accommodate continually emerging new heterogeneous agent types into collaborative perception, while ensuring high perception performance and low integration cost? To address this problem, we propose HEterogeneous ALliance (HEAL), a novel extensible collaborative perception framework. 
HEAL first establishes a unified feature space with initial agents via a novel multi-scale foreground-aware Pyramid Fusion network. When heterogeneous new agents emerge with previously unseen modalities or models, we align them to the established unified space with an innovative backward alignment. This step only involves individual training on the new agent type, thus presenting extremely low training costs and high extensibility. To enrich agents' data heterogeneity, we bring OPV2V-H, a new large-scale dataset with more diverse sensor types. Extensive experiments on OPV2V-H and DAIR-V2X datasets show that HEAL surpasses SOTA methods in performance while reducing the training parameters by 91.5\\% when integrating 3 new agent types. We further implement a comprehensive codebase at: https://github.com/yifanlu0227/HEAL", "type": "Poster", @@ -14161,7 +14161,7 @@ "id": 18885, "title": "Generative Pre-training for Speech with Flow Matching", "authors": [ - "Alexander Liu", + "Alexander H. Liu", "Matthew Le", "Apoorv Vyas", "Bowen Shi", @@ -14198,11 +14198,11 @@ "id": 18882, "title": "The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models", "authors": [ - "Raphael Avalos Martinez de Escobar", + "Rapha\u00ebl Avalos", "Florent Delgrange", "Ann Nowe", "Guillermo Perez", - "Diederik M. Roijers" + "Diederik M Roijers" ], "abstract": "Partially Observable Markov Decision Processes (POMDPs) are used to model environments where the full state cannot be perceived by an agent. As such the agent needs to reason taking into account the past observations and actions. However, simply remembering the full history is generally intractable due to the exponential growth in the history space. Maintaining a probability distribution that models the belief over what the true state is can be used as a sufficient statistic of the history, but its computation requires access to the model of the environment and is often intractable. While SOTA algorithms use Recurrent Neural Networks to compress the observation-action history aiming to learn a sufficient statistic, they lack guarantees of success and can lead to sub-optimal policies. To overcome this, we propose the Wasserstein Belief Updater, an RL algorithm that learns a latent model of the POMDP and an approximation of the belief update. Our approach comes with theoretical guarantees on the quality of our approximation ensuring that our outputted beliefs allow for learning the optimal value function.", "type": "Poster", @@ -14274,7 +14274,7 @@ "Michael Shavlovsky", "Holakou Rahmanian", "Elisa Tardini", - "Kiran Thekumparampil", + "Kiran Koshy Thekumparampil", "Tesi Xiao", "Lexing Ying" ], @@ -14293,7 +14293,7 @@ "authors": [ "Arnav Gudibande", "Eric Wallace", - "Charlie Snell", + "Charlie Victor Snell", "Xinyang Geng", "Hao Liu", "Pieter Abbeel", @@ -14328,12 +14328,12 @@ }, { "id": 18872, - "title": "Who to imitate: Imitating desired behavior from divserse multi-agent datasets", + "title": "Select to Perfect: Imitating desired behavior from large multi-agent data", "authors": [ "Tim Franzmeyer", - "Jakob Foerster", "Edith Elkind", "Philip Torr", + "Jakob Nicolaus Foerster", "Joao F. 
Henriques" ], "abstract": "AI agents are commonly trained with large datasets of demonstrations of human behavior.However, not all behaviors are equally safe or desirable.Desired characteristics for an AI agent can be expressed by assigning desirability scores, which we assume are assigned to collective trajectories, but not to individual behaviors.For example, in a dataset of vehicle interactions, these scores might relate to the number of incidents that occurred. We first assess the effect of each individual agent's behavior on the collective desirability score, e.g., assessing how likely an agent is to cause incidents.This allows us to afterward only imitate agents with desired behavior, e.g., only imitating agents that are unlikely to cause incidents. To enable this, we propose the concept of an agent's \\textit{Exchange Value}, which quantifies an individual agent's contribution to the collective desirability score. This is expressed as the expected change in desirability score when substituting the agent for a randomly selected agent.We propose additional methods for estimating Exchange Values from real-world datasets, enabling us to learn aligned imitation policies that outperform relevant baselines.", @@ -14388,7 +14388,7 @@ "authors": [ "George Stoica", "Daniel Bolya", - "Jakob Bjorner", + "Jakob Brandt Bjorner", "Pratik Ramesh", "Taylor Hearn", "Judy Hoffman" @@ -14530,7 +14530,7 @@ "Qiwen Cui", "Zhihan Xiong", "Maryam Fazel", - "Simon Du" + "Simon Shaolei Du" ], "abstract": "We investigate learning the equilibria in non-stationary multi-agent systems and address the challenges that differentiate multi-agent learning from single-agent learning. Specifically, we focus on games with bandit feedback, where testing an equilibrium can result in substantial regret even when the gap to be tested is small, and the existence of multiple optimal solutions (equilibria) in stationary games poses extra challenges. To overcome these obstacles, we propose a versatile black-box approach applicable to a broad spectrum of problems, such as general-sum games, potential games, and Markov games, when equipped with appropriate learning and testing oracles for stationary environments. Our algorithms can achieve $\\widetilde{O}\\left(\\Delta^{1/4}T^{3/4}\\right)$ regret when the degree of nonstationarity, as measured by total variation $\\Delta$, is known, and $\\widetilde{O}\\left(\\Delta^{1/5}T^{4/5}\\right)$ regret when $\\Delta$ is unknown, where $T$ is the number of rounds. Meanwhile, our algorithm inherits the favorable dependence on number of agents from the oracles. 
As a side contribution that may be of independent interest, we show how to test for various types of equilibria by a black-box reduction to single-agent learning, which includes Nash equilibria, correlated equilibria, and coarse correlated equilibria.", "type": "Poster", @@ -14543,14 +14543,14 @@ }, { "id": 18860, - "title": "On input-dependence and recall in convolutional language models", + "title": "Zoology: Measuring and Improving Recall in Efficient Language Models", "authors": [ "Simran Arora", "Sabri Eyuboglu", "Aman Timalsina", "Isys Johnson", "Michael Poli", - "James Y Zou", + "James Zou", "Atri Rudra", "Christopher Re" ], @@ -14567,13 +14567,13 @@ "id": 18859, "title": "H-GAP: Humanoid Control with a Generalist Planner", "authors": [ - "Zhengyao Jiang", + "zhengyao jiang", "Yingchen Xu", "Nolan Wagener", "Yicheng Luo", "Michael Janner", "Edward Grefenstette", - "Tim Rocktaeschel", + "Tim Rockt\u00e4schel", "Yuandong Tian" ], "abstract": "Humanoid control is an important research challenge offering avenues for integration into human-centric infrastructures and enabling physics-driven humanoid animations. The daunting challenges in this field stem from the difficulty of optimizing in high-dimensional action spaces and the instability introduced by the bipedal morphology of humanoids. However, the extensive collection of human motion-captured data and the derived datasets of humanoid trajectories, such as MoCapAct, paves the way to tackle these challenges. In this context, we present Humanoid Generalist Autoencoding Planner (H-GAP), a state-action trajectory generative model trained on humanoid trajectories derived from human motion-captured data, capable of adeptly handling downstream control tasks with Model Predictive Control (MPC). For 56 degrees of freedom humanoid, we empirically demonstrate that H-GAP learns to represent and generate a wide range of motor behaviors. Further, without any learning from online interactions, it can also flexibly transfer these behaviours to solve novel downstream control tasks via planning. Notably, H-GAP excels established MPC baselines with access to the ground truth model, and is superior or comparable to offline RL methods trained for individual tasks. Finally, we do a series of empirical studies on the scaling properties of H-GAP, showing the potential for performance gains via additional data but not computing.", "type": "Poster", @@ -14593,7 +14593,7 @@ "Yiping Wang", "Zhenyu Zhang", "Beidi Chen", - "Simon Du" + "Simon Shaolei Du" ], "abstract": "We propose Joint MLP/Attention (JoMA) dynamics, a novel mathematical framework to understand the training procedure of multilayer Transformer architectures. This is achieved by integrating out the self-attention layer in Transformers, producing a modified dynamics of MLP layers only. JoMA removes unrealistic assumptions in previous analysis (e.g., lack of residual connection), and predicts that the attention first becomes sparse (to learn salient tokens), then dense (to learn less salient tokens) in the presence of nonlinear activations, while in the linear case, it is consistent with existing works. We leverage JoMA to qualitatively explain how tokens are combined to form hierarchies in multilayer Transformers, when the input tokens are generated by a latent hierarchical generative model.
Experiments on models trained from real-world dataset (Wikitext2/Wikitext103) and various pre- trained models (OPT, Pythia) verify our theoretical findings.", "type": "Poster", @@ -14610,7 +14610,7 @@ "id": 18855, "title": "Delta-AI: Local objectives for amortized inference in sparse graphical models", "authors": [ - "Jean-Pierre Falet", + "Jean-Pierre Ren\u00e9 Falet", "Hae Beom Lee", "Nikolay Malkin", "Chen Sun", @@ -14636,7 +14636,7 @@ "Jongheon Jeong", "Minyong An", "Mohammad Ghavamzadeh", - "Krishnamurthy Dvijotham", + "Krishnamurthy Dj Dvijotham", "Jinwoo Shin", "Kimin Lee" ], @@ -14697,7 +14697,7 @@ "Yu-Xiao Guo", "Hao Pan", "Peng-Shuai Wang", - "\u7ae5\u6b23 TONG XIN", + "Xin Tong", "Yang Liu", "Qixing Huang" ], @@ -14735,7 +14735,7 @@ "Peijin Jia", "Bangjun Wang", "Li Chen", - "Kun Jiang", + "KUN JIANG", "Junchi Yan", "Hongyang Li" ], @@ -14813,7 +14813,7 @@ "title": "The Effectiveness of Random Forgetting for Robust Generalization", "authors": [ "Vijaya Raghavan T Ramkumar", - "Bahram Yoosefizonooz", + "Bahram Zonooz", "Elahe Arani" ], "abstract": "Deep neural networks are susceptible to adversarial attacks, which can compromise their performance and accuracy. Adversarial Training (AT) has emerged as a popular approach for protecting neural networks against such attacks. However, a key challenge of AT is robust overfitting, where the network's robust performance on test data deteriorates with further training, thus hindering generalization. Motivated by the concept of active forgetting in the brain, we introduce a novel learning paradigm called \"Forget to Mitigate Overfitting (FOMO)\". FOMO alternates between the forgetting phase, which randomly forgets a subset of weights and regulates the model's information through weight reinitialization, and the relearning phase, which emphasizes learning generalizable features. Our experiments on benchmark datasets and adversarial attacks show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy while improving the state-of-the-art robustness. Furthermore, FOMO provides a better trade-off between the standard and robust accuracy outperforming baseline adversarial methods. Finally, our framework is robust to AutoAttacks and increases generalization in many real-world scenarios.", @@ -14901,12 +14901,12 @@ "id": 18831, "title": "Discovering Temporally-Aware Reinforcement Learning Algorithms", "authors": [ - "Matthew T Jackson", + "Matthew Thomas Jackson", "Chris Lu", "Louis Kirsch", - "Robert Lange", + "Robert Tjarko Lange", "Shimon Whiteson", - "Jakob Foerster" + "Jakob Nicolaus Foerster" ], "abstract": "Recent advancements in meta-learning have enabled the automatic discovery of novel reinforcement learning algorithms parameterized by surrogate objective functions. To improve upon manually designed algorithms, the parameterization of this learned objective function must be expressive enough to represent novel principles of learning (instead of merely recovering already established ones) while still generalizing to a wide range of settings outside of its meta-training distribution. However, existing methods focus on discovering objective functions that, like many widely used objective functions in reinforcement learning, do not take into account the total number of steps allowed for training, or \u201ctraining horizon\u201d. In contrast, humans use a plethora of different learning objectives across the course of acquiring a new ability. 
For instance, students may alter their studying techniques based on the proximity to exam deadlines and their self-assessed capabilities. This paper contends that ignoring the optimization time horizon significantly restricts the expressive potential of discovered learning algorithms. We propose a simple augmentation to two existing objective discovery approaches that allows the discovered algorithm to dynamically update its objective function throughout the agent\u2019s training procedure, resulting in expressive schedules and increased generalization across different training horizons. In the process, we find that commonly used meta-gradient approaches fail to discover such adaptive objective functions while evolution strategies discover highly dynamic learning rules. We demonstrate the effectiveness of our approach on a wide range of tasks and analyze the resulting learned algorithms, which we find effectively balance exploration and exploitation by modifying the structure of their learning rules throughout the agent\u2019s lifetime.", "type": "Poster", @@ -14921,7 +14921,7 @@ "id": 18830, "title": "CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting", "authors": [ - "xue wang", + "Xue Wang", "Tian Zhou", "Qingsong Wen", "Jinyang Gao", @@ -15016,7 +15016,7 @@ "Angelica Chen", "Ravid Shwartz-Ziv", "Kyunghyun Cho", - "Matthew Leavitt", + "Matthew L Leavitt", "Naomi Saphra" ], "abstract": "Most interpretability research in NLP focuses on understanding the behavior and features of a fully trained model. However, certain insights into model behavior may only be accessible by observing the trajectory of the training process. We present a case study of syntax acquisition in masked language models (MLMs) that demonstrates how analyzing the evolution of interpretable artifacts throughout training deepens our understanding of emergent behavior. In particular, we study Syntactic Attention Structure (SAS), a naturally emerging property of MLMs wherein specific Transformer heads tend to focus on specific syntactic relations. We identify a brief window in pretraining when models abruptly acquire SAS, concurrent with a steep drop in loss. This breakthrough precipitates the subsequent acquisition of linguistic capabilities. We then examine the causal role of SAS by manipulating SAS during training, and demonstrate that SAS is necessary for the development of grammatical capabilities. We further find that SAS competes with other beneficial traits during training, and that briefly suppressing SAS improves model quality. These findings offer an interpretation of a real-world example of both simplicity bias and breakthrough training dynamics.", @@ -15168,7 +15168,7 @@ }, { "id": 18813, - "title": "Fast Ensembling with Diffusion Schr\\\"odinger Bridge", + "title": "Fast Ensembling with Diffusion Schr\u00f6dinger Bridge", "authors": [ "Hyunsu Kim", "Jongmin Yoon", @@ -15191,7 +15191,7 @@ "Jiayuan Mao", "Yilun Du", "Shao-Hua Sun", - "Joshua B Tenenbaum" + "Joshua B. Tenenbaum" ], "abstract": "In this work, we present an approach to construct a video-based robot policy capable of reliably executing diverse tasks across different robots and environments from few video demonstrations without using any action annotations. Our method leverages images as a task-agnostic representation, encoding both the state and action information, and text as a general representation for specifying robot goals. 
By synthesizing videos that \u201challucinate\u201d robot executing actions and in combination with dense correspondences between frames, our approach can infer the closed-formed action to execute to an environment without the need of any explicit action labels. This unique capability allows us to train the policy solely based on RGB videos and deploy learned policies to various robotic tasks. We demonstrate the efficacy of our approach in learning policies on table-top manipulation and navigation tasks. Additionally, we contribute an open-source framework for efficient video modeling, enabling the training of high-fidelity policy models with four GPUs within a single day.", "type": "Spotlight Poster", @@ -15211,7 +15211,7 @@ "Mark Hamilton", "Ayush Tewari", "Simon Stent", - "William Freeman", + "William T. Freeman", "Ruth Rosenholtz" ], "abstract": "Evaluating deep neural networks (DNNs) as models of human perception has given rich insights into both human visual processing and representational properties of DNNs. We extend this work by analyzing how well DNNs perform compared to humans when constrained by peripheral vision -- which limits human performance on a variety of tasks, but also benefits the visual system significantly. We evaluate this by (1) modifying the Texture Tiling Model (TTM), a well tested model of peripheral vision to be more flexibly used with DNNs, (2) generating a large dataset which we call COCO-Periph that contains images transformed to capture the information available in human peripheral vision, and (3) comparing DNNs to humans at peripheral object detection using a psychophysics experiment. Our results show that common DNNs underperform at object detection compared to humans when simulating peripheral vision with TTM. Training on COCO-Periph begins to reduce the gap between human and DNN performance and leads to small increases in corruption robustness, but DNNs still struggle to capture human-like sensitivity to peripheral clutter. Our work brings us closer to accurately modeling human vision, and paves the way for DNNs to mimic and sometimes benefit from properties of human visual processing.", @@ -15227,9 +15227,9 @@ "id": 18810, "title": "Experimental Design for Multi-Channel Imaging via Task-Driven Feature Selection", "authors": [ - "Stefano Blumberg", - "Paddy Slator", - "Daniel Alexander" + "Stefano B. Blumberg", + "Paddy J. Slator", + "Daniel C. Alexander" ], "abstract": "This paper presents a data-driven, task-specific paradigm for experimental design, to shorten acquisition time, reduce costs, and accelerate the deployment of imaging devices. Current standard approaches in experimental design focus on model-parameter estimation and require specification of a particular model, whereas in imaging, other tasks may drive the design. Furthermore, such approaches are often lead to intractable optimisation problems in real-world imaging applications. Here we put forward a new paradigm for experimental design that simultaneously optimizes the design (set of image channels) and trains a machine-learning model to execute a user-specified image-analysis task. The approach obtains data densely-sampled over the measurement space (many image channels) for a small number of acquisitions, then identifies a subset of channels of pre-specified size that best supports the task. 
We propose a method: TADRED for TAsk-DRiven experimental design in imaging, to identify the most informative channel-subset whilst simultaneously training a network to execute the task given the subset. Experiments demonstrate the potential of TADRED in diverse imaging applications: several clinically-relevant tasks in magnetic resonance imaging; and remote sensing and physiological applications of hyperspectral imaging. Results show substantial improvement over classical experimental design, two recent application-specific methods within the new paradigm we explore, and state-of-the-art approaches in supervised feature selection. We anticipate further applications of our approach; code (for reviewers) is available: \\cite{ouranonymouscode}.", "type": "Poster", @@ -15302,7 +15302,7 @@ "authors": [ "Jonathan Scott", "Hossein Zakerinia", - "Christop Lampert" + "Christoph H Lampert" ], "abstract": "We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects: 1) it produces more accurate models, especially in the low-data regime, and not only for clients present during its training phase, but also for any that may emerge in the future; 2) it reduces the amount of on-client computation and client-server communication by providing future clients with ready-to-use personalized models that require no additional finetuning or optimization; 3) it comes with theoretical guarantees that establish generalization from the observed clients to future ones. At the core of PeFLL lies a learning-to-learn approach that jointly trains an embedding network and a hypernetwork. The embedding network is used to represent clients in a latent descriptor space in a way that reflects their similarity to each other. The hypernetwork takes as input such descriptors and outputs the parameters of fully personalized client models. 
In combination, both networks constitute a learning algorithm that achieves state-of-the-art performance in several personalized federated learning benchmarks.", "type": "Poster", @@ -15317,7 +15317,7 @@ "id": 18805, "title": "Causal Structure Recovery with Latent Variables under Milder Distributional and Graphical Assumptions", "authors": [ - "Xiuchuan Li", + "Xiu-Chuan Li", "Kun Zhang", "Tongliang Liu" ], @@ -15334,7 +15334,7 @@ "id": 18804, "title": "Forward Learning with Top-Down Feedback: Empirical and Analytical Characterization", "authors": [ - "Ravi Srinivasan", + "Ravi Francesco Srinivasan", "Francesca Mignacco", "Martino Sorbaro", "Maria Refinetti", @@ -15462,7 +15462,7 @@ "title": "MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models", "authors": [ "Longhui Yu", - "Weisen JIANG", + "Weisen Jiang", "Han Shi", "Jincheng YU", "Zhengying Liu", @@ -15487,7 +15487,7 @@ "authors": [ "Yichao Shen", "Zigang Geng", - "YUHUI YUAN", + "Yuhui Yuan", "Yutong Lin", "Ze Liu", "Chunyu Wang", @@ -15508,7 +15508,7 @@ "id": 18793, "title": "Reclaiming the Source of Programmatic Policies: Programmatic versus Latent Spaces", "authors": [ - "Tales Carvalho", + "Tales Henrique Carvalho", "Kenneth Tjhia", "Levi Lelis" ], @@ -15581,12 +15581,12 @@ }, { "id": 18784, - "title": "Text-driven Prompt Generation for Vision-Language Models in Federated Learning", + "title": "Federated Text-driven Prompt Generation for Vision-Language Models", "authors": [ "Chen Qiu", "Xingyu Li", "Chaithanya Kumar Mummadi", - "Madan Ganesh", + "Madan Ravi Ganesh", "Zhenzhen Li", "Lu Peng", "Wan-Yi Lin" @@ -15607,8 +15607,8 @@ "Rui Ye", "Yaxin Du", "Zhenyang Ni", - "Siheng Chen", - "Yanfeng Wang" + "Yanfeng Wang", + "Siheng Chen" ], "abstract": "In federated learning (FL), data heterogeneity is one key bottleneck that causes model divergence and limits performance. Addressing this, existing methods often regard data heterogeneity as an inherent property and propose to mitigate its adverse effects by correcting models. In this paper, we seek to break this inherent property by generating data to complement the original dataset to fundamentally mitigate heterogeneity level. As a novel attempt from the perspective of data, we propose federated learning with consensus-oriented generation (FedCOG). 
FedCOG consists of two key components at the client side: complementary data generation, which generates data extracted from the shared global model to complement the original dataset, and knowledge-distillation-based model training, which distills knowledge from global model to local model based on the generated data to mitigate over-fitting the original heterogeneous dataset.FedCOG has two critical advantages: 1) it can be a plug-and-play module to further improve the performance of most existing FL methods, and 2) it is naturally compatible with standard FL protocols such as Secure Aggregation since it makes no modification in communication process.Extensive experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods and has the plug-and-play property.", "type": "Poster", @@ -15624,7 +15624,7 @@ "title": "Towards Imitation Learning to Branch for MIP: A Hybrid Reinforcement Learning based Sample Augmentation Approach", "authors": [ "Changwen Zhang", - "wenli ouyang", + "Wenli Ouyang", "Hao Yuan", "Liming Gong", "Yong Sun", @@ -15693,7 +15693,7 @@ "authors": [ "Hanqi Zhou", "Robert Bamler", - "Charley Wu", + "Charley M Wu", "\u00c1lvaro Tejero-Cantero" ], "abstract": "Intelligent tutoring systems optimize the selection and timing of learning materials to enhance understanding and long-term retention. This requires estimates of both the learner's progress (\"knowledge tracing\"; KT), and the prerequisite structure of the learning domain (\"knowledge mapping\"). While recent deep learning models achieve high KT accuracy, they do so at the expense of the interpretability of psychologically-inspired models. In this work, we present a solution to this trade-off. PSI-KT is a hierarchical generative approach that explicitly models how both individual cognitive traits and the prerequisite structure of knowledge influence learning dynamics, thus achieving interpretability by design. Moreover, by using scalable Bayesian inference, PSI-KT targets the real-world need for efficient personalization even with a growing body of learners and interaction data. Evaluated on three datasets from online learning platforms, PSI-KT achieves superior multi-step **p**redictive accuracy and **s**calable inference in continual-learning settings, all while providing **i**nterpretable representations of learner-specific traits and the prerequisite structure of knowledge that causally supports learning. In sum, predictive, scalable and interpretable knowledge tracing with solid knowledge mapping lays a key foundation for effective personalized learning to make education accessible to a broad, global audience.", @@ -15707,11 +15707,11 @@ }, { "id": 18777, - "title": "MOTOR: A Time-To-Event Foundation Model For Structured Medical Records", + "title": "MOTOR: A Time-to-Event Foundation Model For Structured Medical Records", "authors": [ "Ethan Steinberg", + "Jason Alan Fries", "Yizhe Xu", - "Jason Fries", "Nigam Shah" ], "abstract": "We present a self-supervised, time-to-event (TTE) foundation model called MOTOR (Many Outcome Time Oriented Representations) which is pretrained on timestamped sequences of events in electronic health records (EHR) and health insurance claims. TTE models are used for estimating the probability distribution of the time until a specific event occurs, which is an important task in medical settings. 
TTE models provide many advantages over classification using fixed time horizons, including naturally handling censored observations, but are challenging to train with limited labeled data. MOTOR addresses this challenge by pretraining on up to 55M patient records (9B clinical events). We evaluate MOTOR's transfer learning performance on 19 tasks, across 3 patient databases (a private EHR system, MIMIC-IV, and Merative claims data). Task-specific models adapted from MOTOR improve time-dependent C statistics by 4.6\\% over state-of-the-art, improve label efficiency by up to 95\\% ,and are more robust to temporal distributional shifts. We further evaluate cross-site portability by adapting our MOTOR foundation model for six prediction tasks on the MIMIC-IV dataset, where it outperforms all baselines. MOTOR is the first foundation model for medical TTE predictions and we release a 143M parameter pretrained model for research use at [redacted URL].", @@ -15725,11 +15725,11 @@ }, { "id": 18775, - "title": "ELoRA: Efficient Low-Rank Adaptation with Random Matrices", + "title": "VeRA: Vector-based Random Matrix Adaptation", "authors": [ - "Dawid Kopiczko", + "Dawid Jan Kopiczko", "Tijmen Blankevoort", - "Yuki Asano" + "Yuki M Asano" ], "abstract": "It is becoming common practice for natural language processing to finetune pretrained language models for several downstream tasks at the same time. In practice, one might see several use cases based on the same model running simultaneously. Yet, this practice comes with considerable storage requirements, an issue that becomes particularly acute when scaling to large models or deploying numerous per-user or per-task adapted models. Although parameter-efficient finetuning methods such as LoRA exist, they do not fully mitigate this storage challenge. To this end, we introduce Efficient Low-Rank Adaptation with Random Matrices (ELoRA), which takes parameter efficiency to the extreme. By freezing a single pair of random low-rank matrices, shared across all layers, and using small layer-wise trainable scaling vectors, ELoRA achieves a 10x reduction in trainable parameters compared to LoRA without compromising performance levels. We demonstrate the effectiveness of the method on the GLUE benchmark and analyze its parameter-performance trade-off. Finally, using the Llama2 7B model, we show that ELoRA can also be used for instruction-tuning with merely 1.4M parameters.", "type": "Poster", @@ -15744,10 +15744,10 @@ "id": 18773, "title": "Fantastic Generalization Measures are Nowhere to be Found", "authors": [ + "Michael Gastpar", "Ido Nachum", "Jonathan Shafer", - "Thomas Weinberger", - "Michael Gastpar" + "Thomas Weinberger" ], "abstract": "Numerous generalization bounds have been proposed in the literature as potential explanations for the ability of neural networks to generalize in the overparameterized setting. However, none of these bounds are tight. For instance, in their paper \u201cFantastic Generalization Measures and Where to Find Them\u201d, Jiang et al. (2020) examine more than a dozen generalization bounds, and show empirically that none of them imply guarantees that can explain the remarkable performance of neural networks. This raises the question of whether tight generalization bounds are at all possible. We consider two types of generalization bounds common in the literature: (1) bounds that depend on the training set and the output of the learning algorithm. 
There are multiple bounds of this type in the literature (e.g., norm- and margin-based bounds), but we prove mathematically that no such bound can be uniformly tight in the overparameterized setting; (2) bounds that depend on the training set and on the learning algorithm (e.g., stability bounds). For these bounds, we show a trade-off between the algorithm's performance and the bound's tightness. Namely, under mild assumptions, if the algorithm achieves good accuracy in the overparameterized setting, then no generalization bound can be tight for it. We conclude that generalization bounds in the overparameterized setting cannot be tight without suitable assumptions on the population distribution.", "type": "Poster", @@ -15845,7 +15845,7 @@ "title": "Lagrangian Flow Networks for Conservation Laws", "authors": [ "Fabricio Arend Torres", - "Marcello Negri", + "Marcello Massimo Negri", "Marco Inversi", "Jonathan Aellen", "Volker Roth" @@ -15871,7 +15871,7 @@ "Michael Foshey", "Benjamin Eckart", "Jan Kautz", - "Joshua B Tenenbaum", + "Joshua B. Tenenbaum", "Antonio Torralba", "Wojciech Matusik" ], @@ -15910,8 +15910,8 @@ "title": "InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists", "authors": [ "Yulu Gan", - "Sung Woo Park", - "Alexander Schubert", + "Sungwoo Park", + "Alexander Marcel Schubert", "Anthony Philippakis", "Ahmed Alaa" ], @@ -15972,7 +15972,7 @@ }, { "id": 18760, - "title": "Masked Audio Generative Modeling", + "title": "Masked Audio Generation using a Single Non-Autoregressive Transformer", "authors": [ "Alon Ziv", "Itai Gat", @@ -16021,7 +16021,7 @@ "Yu Li", "Lei Zhang", "Jian Zhang", - "Yuan Li" + "Li Yuan" ], "abstract": "Recent text-to-3D generation methods achieve impressive 3D content creation capacity thanks to the advances in image diffusion models and optimizing strategies. However, current methods struggle to generate correct 3D content for a complex prompt in semantics, i.e., a prompt describing multiple interacted objects binding with different attributes. In this work, we propose a general framework named Progressive3D, which decomposes the entire generation into a series of locally progressive editing steps to create precise 3D content for complex prompts, and we constrain the content change to only occur in regions determined by user-defined region prompts in each editing step. Furthermore, we propose an overlapped semantic component suppression technique to encourage the optimization process to focus more on the semantic differences between prompts. Extensive experiments demonstrate that the proposed Progressive3D framework generates precise 3D content for prompts with complex semantics through progressive editing steps and is general for various text-to-3D methods driven by different 3D representations.", "type": "Poster", @@ -16056,7 +16056,7 @@ "title": "Leveraging Optimization for Adaptive Attacks on Image Watermarks", "authors": [ "Nils Lukas", - "Abdelrahman Ahmed", + "Abdulrahman Diaa", "Lucas Fenaux", "Florian Kerschbaum" ], @@ -16071,7 +16071,7 @@ }, { "id": 18754, - "title": "Biased Temporal Convolution Graph Network for Time Series Forecasting with Missing Values.", + "title": "Biased Temporal Convolution Graph Network for Time Series Forecasting with Missing Values", "authors": [ "Xiaodan Chen", "Xiucheng Li", @@ -16190,7 +16190,7 @@ "authors": [ "Dinghuai Zhang", "Ricky T. Q. 
Chen", - "Chenghao Liu", + "Cheng-Hao Liu", "Aaron Courville", "Yoshua Bengio" ], @@ -16223,7 +16223,7 @@ "title": "Headless Language Models: Learning without Predicting with Contrastive Weight Tying", "authors": [ "Nathan Godey", - "\u00c9ric Clergerie", + "\u00c9ric Villemonte de la Clergerie", "Beno\u00eet Sagot" ], "abstract": "Self-supervised pre-training of language models usually consists in predicting probability distributions over extensive token vocabularies. In this study, we propose an innovative method that shifts away from probability prediction and instead focuses on reconstructing input embeddings in a contrastive fashion via Constrastive Weight Tying (CWT). We apply this approach to pretrain Headless Language Models in both monolingual and multilingual contexts. Our method offers practical advantages, substantially reducing training computational requirements by up to 20 times, while simultaneously enhancing downstream performance and data efficiency. We observe a significant +1.6 GLUE score increase and a notable +2.7 LAMBADA accuracy improvement compared to classical LMs within similar compute budgets.", @@ -16257,10 +16257,10 @@ "id": 18742, "title": "CoT3DRef: Chain-of-Thoughts Data-Efficient 3D Visual Grounding", "authors": [ - "eslam Abdelrahman", + "Eslam Mohamed BAKR", "Mohamed Ayman Mohamed", "Mahmoud Ahmed", - "Habib", + "Habib Slim", "Mohamed Elhoseiny" ], "abstract": "3D visual grounding is the ability to localize objects in 3D scenes conditioned onan input utterance. Most existing methods devote the referring head to localize thereferred object directly. However, this approach will fail in complex scenarios andnot illustrate how and why the network reaches the final decision. In this paper,we address this question \u201cCan we design an interpretable 3D visual groundingframework that has the potential to mimic the human perception system?\u201d. To thisend, we formulate the 3D visual grounding problem as a sequence-to-sequence(Seq2Seq) task by first predicting a chain of anchors and then utilizing them to pre-dict the final target. Following the chain of thoughts approach enables us to decom-pose the referring task into interpretable intermediate steps, which in turn, booststhe performance and makes our framework extremely data-efficient. Interpretabil-ity not only improves the overall performance but also helps us identify failurecases. Moreover, our proposed framework can be easily integrated into any existingarchitecture. We validate our approach through comprehensive experiments on theNr3D and Sr3D benchmarks and show consistent performance gains compared toexisting methods without requiring any manually annotated data. Furthermore, ourproposed framework, dubbed CoT3DRef, is significantly data-efficient, whereaswhen trained only on 10% of the data, we match the SOTA performance that trainedon the entire data. The code is available at https://cot3dref.github.io/.", @@ -16277,7 +16277,7 @@ "title": "$\\infty$-Diff: Infinite Resolution Diffusion with Subsampled Mollified States", "authors": [ "Sam Bond-Taylor", - "Chris G Willcocks" + "Chris G. Willcocks" ], "abstract": "We introduce $\\infty$-Diff, a generative diffusion model defined in an infinite-dimensional Hilbert space that allows infinite resolution data to be modelled. By randomly sampling subsets of coordinates during training and learning to denoise the content at those coordinates, a continuous function is learned that allows sampling at arbitrary resolutions. 
Prior infinite-dimensional generative models use point-wise functions that require latent compression for global context. In contrast, we propose using non-local integral operators to map between Hilbert spaces, allowing spatial information aggregation; to facilitate this, we design a powerful and efficient multi-scale architecture that operates directly on raw sparse coordinates. Training on high-resolution datasets we demonstrate that high-quality diffusion models can be learned with even $8\\times$ subsampling rates, enabling substantial improvements in run-time and memory requirements, achieving significantly higher sample quality as evidenced by lower FID scores, while also being able to effectively scale to higher resolutions than the training data while retaining detail.", "type": "Poster", @@ -16349,10 +16349,10 @@ "id": 18736, "title": "More is Better: when Infinite Overparameterization is Optimal and Overfitting is Obligatory", "authors": [ - "James Simon", + "James B Simon", "Dhruva Karkada", "Nikhil Ghosh", - "Misha Belkin" + "Mikhail Belkin" ], "abstract": "In our era of enormous neural networks, empirical progress has been driven by the philosophy that *more is better.*Recent deep learning practice has found repeatedly that larger model size, more data, and more computation (resulting in lower training loss) optimizing to near-interpolation improves performance. In this paper, we give theoretical backing to these empirical observations by showing that these three properties hold in random feature (RF) regression, a class of models equivalent to shallow networks with only the last layer trained.Concretely, we first show that the test risk of RF regression decreases monotonically with both the number of features and samples, provided the ridge penalty is tuned optimally. In particular, this implies that infinite width RF architectures are preferable to those of any finite width. We then proceed to demonstrate that, for a large class of tasks characterized by powerlaw eigenstructure, training to near-zero training loss is *obligatory:* near-optimal performance can *only* be achieved when the training error is much smaller than the test error. Grounding our theory in real-world data, we find empirically that standard computer vision tasks with convolutional neural kernels clearly fall into this class. Taken together, our results tell a simple, testable story of the benefits of overparameterization and overfitting in random feature models.", "type": "Poster", @@ -16386,7 +16386,7 @@ "title": "ALAM: Averaged Low-Precision Activation for Memory-Efficient Training of Transformer Models", "authors": [ "Sunghyeon Woo", - "SunWoo Lee", + "Sunwoo Lee", "Dongsuk Jeon" ], "abstract": "One of the key challenges in deep neural network training is the substantial amount of GPU memory required to store activations obtained in the forward pass. Various Activation-Compressed Training (ACT) schemes have been proposed to mitigate this issue; however, it is challenging to adopt those approaches in recent transformer-based large language models (LLMs), which experience significant performance drops when the activations are deeply compressed during training. In this paper, we introduce ALAM, a novel ACT framework that utilizes average quantization and a lightweight sensitivity calculation scheme, enabling large memory saving in LLMs while maintaining training performance. We first demonstrate that compressing activations into their group average values minimizes the gradient variance. 
Employing this property, we propose Average Quantization which provides high-quality deeply compressed activations with an effective precision of less than 1 bit and improved flexibility of precision allocation. In addition, we present a cost-effective yet accurate sensitivity calculation algorithm that solely relies on the L2 norm of parameter gradients, substantially reducing memory overhead due to sensitivity calculation. In experiments, the ALAM framework significantly reduces activation memory without compromising accuracy, achieving up to a 12.5$\\times$ compression rate in LLMs.", @@ -16403,7 +16403,7 @@ "title": "Estimating Conditional Mutual Information for Dynamic Feature Selection", "authors": [ "Soham Gadgil", - "Ian Covert", + "Ian Connick Covert", "Su-In Lee" ], "abstract": "Dynamic feature selection, where we sequentially query features to make accurate predictions with a minimal budget, is a promising paradigm to reduce feature acquisition costs and provide transparency into the prediction process. The problem is challenging, however, as it requires both making predictions with arbitrary feature sets and learning a policy to identify the most valuable selections. Here, we take an information-theoretic perspective and prioritize features based on their mutual information with the response variable. The main challenge is implementing this policy, and we design a new approach that estimates the mutual information in a discriminative rather than a generative fashion. Building on our learning approach, we introduce several further improvements: allowing variable feature budgets across samples, enabling non-uniform costs between features, incorporating prior information, and exploring modern architectures to handle partial input information. We find that our method provides consistent gains over recent state-of-the-art methods across a variety of datasets.", @@ -16439,7 +16439,7 @@ "title": "DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models", "authors": [ "Licheng Wen", - "DAOCHENG FU", + "Daocheng Fu", "Xin Li", "Xinyu Cai", "Tao MA", @@ -16466,7 +16466,7 @@ "Tianle Li", "Kai Zhang", "Yujie Lu", - "XINGYU FU", + "Xingyu Fu", "Wenwen Zhuang", "Wenhu Chen" ], @@ -16525,7 +16525,7 @@ "title": "H2O-SDF: Two-phase Learning for 3D Indoor Reconstruction using Object Surface Fields", "authors": [ "Minyoung Park", - "MIRAE DO", + "Mirae Do", "Yeon Jae Shin", "Jaeseok Yoo", "Jongkwang Hong", @@ -16559,7 +16559,7 @@ }, { "id": 18718, - "title": "BRUSLEATTACK: QUERY-EFFICIENT SCORE-BASED SPARSE ADVERSARIAL ATTACK", + "title": "BRUSLEATTACK: A QUERY-EFFICIENT SCORE- BASED BLACK-BOX SPARSE ADVERSARIAL ATTACK", "authors": [ "Quoc Viet Vo", "Ehsan Abbasnejad", @@ -16596,7 +16596,7 @@ }, { "id": 18716, - "title": "RTFS-Net: Recurrent time-frequency modelling for efficient audio-visual speech separation", + "title": "RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation", "authors": [ "Samuel Pegg", "Kai Li", @@ -16634,7 +16634,7 @@ }, { "id": 18714, - "title": "Efficient Algorithms for the CCA Family: Unconstrained Objectives with Unbiased Gradients", + "title": "Unconstrained Stochastic CCA: Unifying Multiview and Self-Supervised Learning", "authors": [ "James Chapman", "Lennie Wells", @@ -16659,9 +16659,9 @@ "Yan Wang", "Hongyan Hao", "Fan Zhou", - "caigao jiang", + "Caigao JIANG", "Chen Pan", - "james zhang", + "James Y. 
Zhang", "Qingsong Wen", "JUN ZHOU", "Hongyuan Mei" @@ -16719,7 +16719,7 @@ "authors": [ "Ali Shahin Shamsabadi", "Gefei Tan", - "Tudor Cebere", + "Tudor Ioan Cebere", "Aur\u00e9lien Bellet", "Hamed Haddadi", "Nicolas Papernot", @@ -16803,7 +16803,7 @@ }, { "id": 18702, - "title": "From Matching to Mixing: A Graph Interpolation Approach for SAT Instance Generation", + "title": "MixSATGEN: Learning Graph Mixing for SAT Instance Generation", "authors": [ "Xinyan Chen", "Yang Li", @@ -16826,7 +16826,7 @@ "Zecheng Wang", "Che Wang", "Zixuan Dong", - "Keith Ross" + "Keith W. Ross" ], "abstract": "Recently, it has been shown that for offline deep reinforcement learning (DRL), pre-training Decision Transformer with a large language corpus can improve downstream performance (Reid et al., 2022). A natural question to ask is whether this performance gain can only be achieved with language pre-training, or can be achieved with simpler pre-training schemes which do not involve language. In this paper, we first show that language is not essential for improved performance, and indeed pre-training with synthetic IID data for a small number of updates can match the performance gains from pre-training with a large language corpus; moreover, pre-training with data generated by a one-step Markov chain can further improve the performance. Inspired by these experimental results, we then consider pre-training Conservative Q-Learning (CQL), a popular offline DRL algorithm, which is Q-learning-based and typically employs a Multi-Layer Perceptron (MLP) backbone. Surprisingly, pre-training with simple synthetic data for a small number of updates can also improve CQL, providing consistent performance improvement on D4RL Gym locomotion datasets. The results of this paper not only illustrate the importance of pre-training for offline DRL but also show that the pre-training data can be synthetic and generated with remarkably simple mechanisms.", "type": "Poster", @@ -16839,7 +16839,7 @@ }, { "id": 18699, - "title": "Cross$Q$: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity", + "title": "CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity", "authors": [ "Aditya Bhatt", "Daniel Palenicek", @@ -16864,10 +16864,10 @@ "authors": [ "Nima Shoghi", "Adeesh Kolluru", - "John Kitchin", - "Zachary Ulissi", - "Larry Zitnick", - "Brandon Wood" + "John R. Kitchin", + "Zachary Ward Ulissi", + "C. Lawrence Zitnick", + "Brandon M Wood" ], "abstract": "The role of machine learning in computing atomic properties is expanding rapidly for a wide range of applications from healthcare to climate change. One important ingredient that has enabled this development is the creation of large and diverse molecular datasets. Given the extreme computational cost of these datasets, an important question moving forward is: Can we limit the need for exhaustive large dataset creation by pre-training a foundation style model over multiple chemical domains to generate transferable atomic representations for downstream fine-tuning tasks? Generalization across the entire molecular space is challenging due to the range and complexity of atomic interactions that exist. In this paper, we present Joint Multi-domain Pre-training (JMP), a robust supervised pre-training strategy that utilizes data from multiple chemical domains, $\\sim$120 million examples in total. 
We demonstrate state-of-the-art results across many targets of the rMD17, QM9, MatBench, QMOF, SPICE, and MD22 datasets. Finally, we conduct ablations to study the impact of different components of JMP on downstream performance.", "type": "Poster", @@ -16883,7 +16883,7 @@ "title": "Deep Geodesic Canonical Correlation Analysis for Covariance-Based Neuroimaging Data", "authors": [ "Ce Ju", - "Reinmar Kobler", + "Reinmar J Kobler", "Liyao Tang", "Cuntai Guan", "Motoaki Kawanabe" ], @@ -16951,12 +16951,12 @@ }, { "id": 18689, - "title": "Quadratic models for understanding neural network dynamics", + "title": "Quadratic models for understanding catapult dynamics of neural networks", "authors": [ "Libin Zhu", "Chaoyue Liu", "Adityanarayanan Radhakrishnan", - "Misha Belkin" + "Mikhail Belkin" ], "abstract": "While neural networks can be approximated by linear models as their width increases, certain properties of wide neural networks cannot be captured by linear models. In this work we show that recently proposed Neural Quadratic Models can exhibit the \"catapult phase\" Lewkowycz et al. (2020) that arises when training such models with large learning rates. We then empirically show that the behaviour of quadratic models parallels that of neural networks in generalization, especially in the catapult phase regime. Our analysis further demonstrates that quadratic models are an effective tool for analysis of neural networks.", "type": "Poster", @@ -16993,7 +16993,7 @@ "Mingjie Sun", "Zhuang Liu", "Anna Bair", - "Zico Kolter" + "J Zico Kolter" ], "abstract": "As their size increases, Large Language Models (LLMs) are natural candidates for network pruning methods: approaches that drop a subset of network weights while striving to preserve performance. Existing methods, however, require either retraining, which is rarely affordable for billion-scale LLMs, or solving a weight reconstruction problem reliant on second-order information, which may also be computationally expensive. In this paper, we introduce a novel, straightforward yet effective pruning method, termed Wanda (Pruning by Weights and activations), designed to induce sparsity in pretrained LLMs. Motivated by the recent observation of emergent large magnitude features in LLMs, our approach prunes weights with the smallest magnitudes multiplied by the corresponding input activations, on a per-output basis. Notably, Wanda requires no retraining or weight update, and the pruned LLM can be used as is. We conduct a thorough evaluation of our method on LLaMA and LLaMA-2 across various language benchmarks. Wanda significantly outperforms the established baseline of magnitude pruning and performs competitively against recent methods involving intensive weight updates.", "type": "Poster", @@ -17011,7 +17011,7 @@ "title": "SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression", "authors": [ "Tim Dettmers", - "Ruslan Svirschevski", + "Ruslan A. Svirschevski", "Vage Egiazarian", "Denis Kuznedelev", "Elias Frantar", @@ -17109,7 +17109,7 @@ "Zhanke Zhou", "Yongqi Zhang", "Jiangchao Yao", - "Quanming Yao", + "quanming yao", "Bo Han" ], "abstract": "To deduce new facts on knowledge graph (KG), a reasoning system learns from the graph structure and collects local evidence to find the answer. However, existing methods suffer from a severe scalability problem due to the utilization of the whole KG for reasoning, which hinders their promise on large-scale KG and cannot be directly addressed by vanilla sampling methods.
In this work, we propose the one-shot subgraph reasoning to achieve efficient as well as adaptive KG reasoning. The design principle is that, instead of directly acting on the whole KG, the reasoning procedure is decoupled into two steps, i.e., (i) extracting only one query-dependent subgraph and (ii) reasoning on this single subgraph. We reveal that the non-parametric and computation-efficient heuristics Personalized PageRank (PPR) can effectively identify the potential answers and supports to the reasoning. With the promoted efficiency, we further introduce the subgraph-based searching of optimal configurations in both data and model spaces. Empirically, our method achieves promoted efficiency and also leading performances on five large-scale benchmarks.", @@ -17237,7 +17237,7 @@ "authors": [ "Zihao Wang", "Eshaan Nichani", - "Jason Lee" + "Jason D. Lee" ], "abstract": "We study the problem of learning hierarchical polynomials over the standard Gaussian distribution with three-layer neural networks. We specifically consider target functions of the form $h = g \\circ p$ where $p : \\mathbb{R}^d \\rightarrow \\mathbb{R}$ is a degree $k$ polynomial and $g: \\mathbb{R} \\rightarrow \\mathbb{R}$ is a degree $q$ polynomial. This function class generalizes the single-index model, which corresponds to $k=1$, and is a natural class of functions possessing an underlying hierarchical structure. Our main result shows that for a large subclass of degree $k$ polynomials $p$, a three-layer neural network trained via layerwise gradient descent on the square loss learns the target $h$ up to vanishing test error in $\\widetilde O(d^k)$ samples and polynomial time. This is a strict improvement over kernel methods, which require $\\widetilde \\Theta(d^{kq})$ samples, as well as existing guarantees for two-layer networks, which require the target function to be low-rank. Our result also generalizes prior works on three-layer neural networks, which were restricted to the case of $p$ being a quadratic. When $p$ is indeed a quadratic, we achieve the information-theoretically optimal sample complexity $\\widetilde O(d^2)$, which is an improvement over prior work (Nichani et al., 2023) requiring a sample size of $\\widetilde\\Theta(d^4)$. Our proof proceeds by showing that during the first stage of training the network performs feature learning to recover the feature $p$ with $\\widetilde O(d^k)$ samples. This work demonstrates the ability of three-layer neural networks to learn complex features and as a result learn a broad class of hierarchical functions.", "type": "Poster", @@ -17313,17 +17313,17 @@ "Bin Zhu", "Bin Lin", "Munan Ning", - "YANG YAN", + "Yang Yan", "Jiaxi Cui", "WANG HongFa", "Yatian Pang", "Wenhao Jiang", "Junwu Zhang", "Zongwei Li", - "Cai Zhang", + "Cai Wan Zhang", "Zhifeng Li", "Wei Liu", - "Yuan Li" + "Li Yuan" ], "abstract": "The video-language (VL) pretraining has achieved remarkable improvement in multiple downstream tasks. However, the current VL pretraining framework is hard to extend to multiple modalities (N modalities, $N\\geq3$) beyond vision and language. We thus propose LanguageBind, taking the language as the bind across different modalities because the language modality is well-explored and contains rich semantics. Specifically, we freeze the language encoder acquired by VL pretraining, then train encoders for other modalities with contrastive learning. As a result, all modalities are mapped to a shared feature space, implementing multi-modal semantic alignment. 
While LanguageBind ensures that we can extend VL modalities to N modalities, we also need a high-quality dataset with alignment data pairs centered on language. We thus propose VIDAL-10M with Video, Infrared, Depth, Audio and their corresponding Language, naming as VIDAL-10M. In our VIDAL-10M, all videos are from short video platforms with complete semantics rather than truncated segments from long videos, and all the video, depth, infrared, and audio modalities are aligned to their textual descriptions. After pretraining on VIDAL-10M, we outperform ImageBind by 1.2% R@1 on the MSR-VTT dataset with only 15% of the parameters in the zero-shot video-text retrieval, validating the high quality of our dataset. Beyond this, our LanguageBind has achieved great improvement in the zero-shot video, audio, depth, and infrared understanding tasks. For instance, on the LLVIP and NYU-D datasets, LanguageBind outperforms ImageBind-huge with 23.8% and 11.1% top-1 accuracy.", "type": "Poster", @@ -17340,7 +17340,7 @@ "authors": [ "Samyadeep Basu", "Nanxuan Zhao", - "Vlad Morariu", + "Vlad I Morariu", "Soheil Feizi", "Varun Manjunatha" ], @@ -17378,10 +17378,10 @@ "CHEN CHEN", "Ruizhe Li", "Yuchen Hu", - "Sabato Siniscalchi", + "Sabato Marco Siniscalchi", "Pin-Yu Chen", "Ensiong Chng", - "Huck Yang" + "Chao-Han Huck Yang" ], "abstract": "Recent studies have successfully shown that large language models (LLMs) can be successfully used for generative error correction (GER) on top of the automatic speech recognition (ASR) output. Specifically, an LLM is utilized to carry out a direct mapping from the N-best hypotheses list generated by an ASR system to the predicted output transcription. However, despite its effectiveness, GER introduces extra data uncertainty since the LLM is trained without taking into account acoustic information available in the speech signal. In this work, we aim to overcome such a limitation by infusing acoustic information before generating the predicted transcription through a novel late fusion solution termed Uncertainty-Aware Dynamic Fusion (UADF). UADF is a multimodal fusion approach implemented into an auto-regressive decoding process and works in two stages: (i) It first analyzes and calibrates the token-level LLM decision, and (ii) it then dynamically assimilates the information from the acoustic modality. Experimental evidence collected from various ASR tasks shows that UADF surpasses existing fusion mechanisms in several ways. It yields significant improvements in word error rate (WER) while mitigating data uncertainty issues in LLM and addressing the poor generalization relied with sole modality during fusion. We also demonstrate that UADF seamlessly adapts to audio-visual speech recognition.", "type": "Poster", @@ -17431,10 +17431,10 @@ "title": "Pseudo-Generalized Dynamic View Synthesis from a Video", "authors": [ "Xiaoming Zhao", - "R Colburn", + "R Alex Colburn", "Fangchang Ma", - "MIGUEL ANGEL BAUTISTA", - "Joshua Susskind", + "Miguel \u00c1ngel Bautista", + "Joshua M. Susskind", "Alex Schwing" ], "abstract": "Rendering scenes observed in a monocular video from novel viewpoints is a challenging problem. For static scenes the community has studied both scene-specific optimization techniques, which optimize on every test scene, and generalized techniques, which only run a deep net forward pass on a test scene. 
In contrast, for dynamic scenes, scene-specific optimization techniques exist, but, to our best knowledge, there is currently no generalized method for dynamic novel view synthesis from a given monocular video. To explore whether generalized dynamic novel view synthesis from monocular videos is possible today, we establish an analysis framework based on existing techniques and work toward the generalized approach. We find a pseudo-generalized process without scene-specific \\emph{appearance} optimization is possible, but geometrically and temporally consistent depth estimates are needed. Despite no scene-specific appearance optimization, the pseudo-generalized approach improves upon some scene-specific methods.For more information see project page at https://xiaoming-zhao.github.io/projects/pgdvs.", @@ -17470,7 +17470,7 @@ }, { "id": 18659, - "title": "A Generative Pre-Training Framework for Spatio-Temporal Graph Transfer Learning", + "title": "Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation", "authors": [ "Yuan Yuan", "Chenyang Shao", @@ -17491,7 +17491,7 @@ "id": 18658, "title": "Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI", "authors": [ - "Wei-Bang Jiang", + "Weibang Jiang", "Liming Zhao", "Bao-liang Lu" ], @@ -17506,7 +17506,7 @@ }, { "id": 18657, - "title": "MetaTool Benchmark: Deciding Whether to Use Tools and Which to Use", + "title": "MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use", "authors": [ "Yue Huang", "Jiawen Shi", @@ -17517,7 +17517,7 @@ "Yixin Liu", "Pan Zhou", "Yao Wan", - "Neil Gong", + "Neil Zhenqiang Gong", "Lichao Sun" ], "abstract": "Large language models (LLMs) have garnered significant attention due to their impressive natural language processing (NLP) capabilities. Recently, many studies have focused on the tool utilization ability of LLMs. They primarily investigated how LLMs effectively collaborate with given specific tools. However, in scenarios where LLMs serve as intelligent agents, as seen in applications like AutoGPT and MetaGPT, LLMs are expected to engage in intricate decision-making processes that involve deciding whether to employ a tool and selecting the most suitable tool(s) from a collection of available tools to fulfill user requests. Therefore, in this paper, we introduce MetaTool, a benchmark designed to evaluate whether LLMs have tool usage awareness and can correctly choose tools. Specifically, we create a dataset called ToolE within the benchmark. This dataset contains various types of user queries in the form of prompts that trigger LLMs to use tools, including both single-tool and multi-tool scenarios. Subsequently, we set the tasks for both tool usage awareness and tool selection. We define four subtasks from different perspectives in tool selection, including tool selection with similar choices, tool selection in specific scenarios, tool selection with possible reliability issues, and multi-tool selection. We conduct experiments involving nine popular LLMs and find that the majority of them still struggle to effectively select tools, highlighting the existing gaps between LLMs and genuine intelligent agents. However, through the error analysis, we found there is still significant room for improvement. 
Finally, we conclude with insights for tool developers that follow ChatGPT to provide detailed descriptions that can enhance the tool selection performance of LLMs.", "type": "Poster", @@ -17607,7 +17607,7 @@ }, { "id": 18651, - "title": "Don't Judge by the Look: A Motion Coherent Augmentation for Video Recognition", + "title": "Don't Judge by the Look: Towards Motion Coherent Video Representation", "authors": [ "Yitian Zhang", "Yue Bai", @@ -17665,7 +17665,7 @@ "authors": [ "Mahsa Keramati", "Lili Meng", - "R. Evans" + "R. David Evans" ], "abstract": "Imbalanced distributions are ubiquitous in real-world data. They create constraints on Deep Neural Networks to represent the minority labels and avoid bias towards majority labels. The extensive body of imbalanced approaches addresses categorical label spaces but fails to effectively extend to regression problems where the label space is continuous. Local and global correlations among continuous labels provide valuable insights towards effectively modelling relationships in feature space. In this work, we propose ConR, a contrastive regularizer that models global and local label similarities in feature space and prevents the features of minority samples from being collapsed into their majority neighbours. ConR discerns the disagreements between the label space and feature space, and imposes a penalty on these disagreements. ConR minds the continuous nature of label space with two main strategies in a contrastive manner: incorrect proximities are penalized proportionate to the label similarities and the correct ones are encouraged to model local similarities. ConR consolidates essential considerations into a generic, easy-to-integrate, and efficient method that effectively addresses deep imbalanced regression. Moreover, ConR is orthogonal to existing approaches and smoothly extends to uni- and multi-dimensional label spaces. Our comprehensive experiments show that ConR significantly boosts the performance of all the state-of-the-art methods on four large-scale deep imbalanced regression benchmarks.", "type": "Poster", @@ -17757,11 +17757,11 @@ }, { "id": 18644, - "title": "Efficient Instance-Optimal Finite-Sum Minimization", + "title": "Efficient Continual Finite-Sum Minimization", "authors": [ "Ioannis Mavrothalassitis", "Stratis Skoulakis", - "Leello Dadi", + "Leello Tadesse Dadi", "Volkan Cevher" ], "abstract": "Given a sequence of functions $f_1,\ldots,f_n$ with $f_i:\mathcal{D}\mapsto \mathbb{R}$, finite-sum minimization seeks a point ${x}^\star \in \mathcal{D}$ minimizing $\sum_{j=1}^nf_j(x)/n$. In this work, we propose a key twist into the finite-sum minimization, dubbed as *instance-optimal finite-sum minimization*, that asks for a sequence of points $x_1^\star, \ldots, x_n^\star \in D$ such that each ${x}^\star_i \in D$ minimizes the prefix-sum $\sum_{j=1}^if_j(x)/i$. Assuming that each prefix-sum is strongly convex, we develop a first-order stochastic instance optimal gradient method $\mathrm{SIOPT}-\mathrm{Grad}$ producing an $\epsilon$-optimal sequence with $\tilde{\mathcal{O}}(n/\epsilon^{1/3} + 1/\sqrt{\epsilon})$ overall *first-order oracles* (FO). An FO corresponds to the computation of a single gradient $\nabla f_j(x)$ at a given $x \in \mathcal{D}$ for some $j \in [n]$.
Our approach significantly improves upon the $\\mathcal{O}(n/\\epsilon)$ FOs that $\\mathrm{StochasticGradientDescent}$ requires and the $\\mathcal{O}(n^2 \\log (1/\\epsilon))$ FOs that state-of-the-art variance reduction methods such as $\\mathrm{Katyusha}$ require. We also prove that there is no natural first-order method with $\\mathcal{O}\\left(n/\\epsilon^\\alpha\\right)$ gradient complexity for $\\alpha < 1/4$, establishing that the first-order complexity of our method is nearly tight.", @@ -17781,7 +17781,7 @@ "Jesse Zhang", "Kavosh Asadi", "Yao Liu", - "DING ZHAO", + "Ding Zhao", "Shoham Sabach", "Rasool Fakoor" ], @@ -17801,7 +17801,7 @@ "Nanda H Krishna", "Colin Bredenberg", "Daniel Levenstein", - "Blake A Richards", + "Blake Aaron Richards", "Guillaume Lajoie" ], "abstract": "During periods of quiescence, such as sleep, neural activity in many brain circuits resembles that observed during periods of task engagement. However, the precise conditions under which task-optimized networks can autonomously reactivate the same network states responsible for online behavior is poorly understood. In this study, we develop a mathematical framework that outlines sufficient conditions for the emergence of neural reactivation in circuits that encode features of smoothly varying stimuli. We demonstrate mathematically that noisy recurrent networks optimized to track environmental state variables using change-based sensory information naturally develop denoising dynamics, which, in the absence of input, cause the network to revisit state configurations observed during periods of online activity. We validate our findings using numerical experiments on two canonical neuroscience tasks: spatial position estimation based on self-motion cues, and head direction estimation based on angular velocity cues. Overall, our work provides theoretical support for modeling offline reactivation as an emergent consequence of task optimization in noisy neural circuits.", @@ -17815,11 +17815,11 @@ }, { "id": 18639, - "title": "Zero-Mean Regularized Spectral Contrastive Learning", + "title": "Zero-Mean Regularized Spectral Contrastive Learning: Implicitly Mitigating Wrong Connections in Positive-Pair Graphs", "authors": [ "Xiong Zhou", "Xianming Liu", - "feilong zhang", + "Feilong Zhang", "Gang Wu", "Deming Zhai", "Junjun Jiang", @@ -17877,7 +17877,7 @@ }, { "id": 18636, - "title": "Hierarchical Data-efficient Representation Learning for Tertiary Structure-based RNA Design", + "title": "RDesign: Hierarchical Data-efficient Representation Learning for Tertiary Structure-based RNA Design", "authors": [ "Cheng Tan", "Yijie Zhang", @@ -17885,7 +17885,7 @@ "Bozhen Hu", "Siyuan Li", "Zicheng Liu", - "Stan Z Li" + "Stan Z. Li" ], "abstract": "While artificial intelligence has made remarkable strides in revealing the relationship between biological macromolecules' primary sequence and tertiary structure, designing RNA sequences based on specified tertiary structures remains challenging. Though existing approaches in protein design have thoroughly explored structure-to-sequence dependencies in proteins, RNA design still confronts difficulties due to structural complexity and data scarcity. Adding to the problem, direct transplantation of protein design methodologies into RNA design fails to achieve satisfactory outcomes although sharing similar structural components. In this study, we aim to systematically construct a data-driven RNA design pipeline. 
We crafted a large, well-curated benchmark dataset and designed a comprehensive structural modeling approach to represent the complex RNA tertiary structure. More importantly, we proposed a hierarchical data-efficient representation learning framework that learns structural representations through contrastive learning at both cluster-level and sample-level to fully leverage the limited data. By constraining data representations within a limited hyperspherical space, the intrinsic relationships between data points could be explicitly imposed. Moreover, we incorporated extracted secondary structures with base pairs as prior knowledge to facilitate the RNA design process. Extensive experiments demonstrate the effectiveness of our proposed method, providing a reliable baseline for future RNA design tasks. The source code and benchmark dataset will be released publicly.", "type": "Poster", @@ -17898,7 +17898,7 @@ }, { "id": 18635, - "title": "Learning to make adherence-aware advice", + "title": "Learning to Make Adherence-aware Advice", "authors": [ "Guanting Chen", "Xiaocheng Li", @@ -17975,7 +17975,7 @@ "authors": [ "Xingyu Liu", "Deepak Pathak", - "DING ZHAO" + "Ding Zhao" ], "abstract": "We investigate the problem of transferring an expert policy from a source robot to multiple different robots. To solve this problem, we propose a method named *Meta-Evolve* that uses continuous robot evolution to efficiently transfer the policy to each target robot through a set of tree-structured evolutionary robot sequences. The robot evolution tree allows the robot evolution paths to be shared, so our approach can significantly outperform naive one-to-one policy transfer. We present a heuristic approach to determine an optimized robot evolution tree. Experiments have shown that our method is able to improve the efficiency of one-to-three transfer of manipulation policy by up to 3.2$\\times$ and one-to-six transfer of agile locomotion policy by 2.4$\\times$ in terms of simulation cost over the baseline of launching multiple independent one-to-one policy transfers. Supplementary videos available at the project website: https://sites.google.com/view/meta-evolve.", "type": "Poster", @@ -17988,7 +17988,7 @@ }, { "id": 18627, - "title": "A Benchmark on Robust Semi-Supervised Learning in Open Environments", + "title": "Realistic Evaluation of Semi-supervised Learning Algorithms in Open Environments", "authors": [ "Lin-Han Jia", "Lan-Zhe Guo", @@ -18010,16 +18010,16 @@ "authors": [ "Yanai Elazar", "Akshita Bhagia", - "Ian Magnusson", + "Ian Helgi Magnusson", "Abhilasha Ravichander", "Dustin Schwenk", "Alane Suhr", - "Pete Walsh", + "Evan Pete Walsh", "Dirk Groeneveld", "Luca Soldaini", "Sameer Singh", "Hannaneh Hajishirzi", - "Noah Smith", + "Noah A. Smith", "Jesse Dodge" ], "abstract": "Large text corpora are the backbone of language models.However, we have a limited understanding of the content of these corpora, including general statistics, quality, social factors, and inclusion of evaluation data (contamination).In this work, we propose What's In My Big Data? (WIMBD), a platform and a set of 16 high-level analyses that allow us to reveal and compare the contents of large text corpora. WIMBD builds on two basic capabilities---count and search---*at scale*, which allows us to analyze more than 35 terabytes on a standard compute node. 
We apply WIMBD to 10 different corpora used to train popular language models, including *C4*, *The Pile*, and *RedPajama*.Our analysis uncovers several surprising and previously undocumented findings about these corpora, including the high prevalence of duplicate, synthetic, and low-quality content, personally identifiable information, toxic language, and benchmark contamination. For instance, we find that about 50% of the documents in *RedPajama* and *LAION-2B-en* are duplicates. In addition, several datasets used for benchmarking models trained on such corpora are contaminated with respect to important benchmarks, including the Winograd Schema Challenge and parts of GLUE and SuperGLUE.We open-source WIMBD code and artifacts to provide a standard set of evaluations for new text-based corpora and to encourage more analyses and transparency around them.", @@ -18037,15 +18037,15 @@ "authors": [ "Josue Ortega Caro", "Antonio Henrique de Oliveira Fonseca", - "Christopher Averill", - "Syed Rizvi", + "Syed A Rizvi", "Matteo Rosati", - "James Cross", + "Christopher Averill", + "James L Cross", "Prateek Mittal", "Emanuele Zappala", - "Rahul Dhodapkar", + "Rahul Madhav Dhodapkar", "Chadi Abdallah", - "David Dijk" + "David van Dijk" ], "abstract": "We introduce the Brain Language Model (BrainLM), a foundation model for brain activity dynamics trained on 6,700 hours of fMRI recordings. Utilizing self-supervised masked-prediction training, BrainLM demonstrates proficiency in both fine-tuning and zero-shot inference tasks. Fine-tuning allows for the accurate prediction of clinical variables like age, anxiety, and PTSD as well as forecasting of future brain states. Critically, the model generalizes well to entirely new external cohorts not seen during training. In zero-shot inference mode, BrainLM can identify intrinsic functional networks directly from raw fMRI data without any network-based supervision during training. The model also generates interpretable latent representations that reveal relationships between brain activity patterns and cognitive states. Overall, BrainLM offers a versatile and interpretable framework for elucidating the complex spatiotemporal dynamics of human brain activity. It serves as a powerful \"lens\" through which massive repositories of fMRI data can be analyzed in new ways, enabling more effective interpretation and utilization at scale. The work demonstrates the potential of foundation models to advance computational neuroscience research.", "type": "Poster", @@ -18105,7 +18105,7 @@ "Tsu-Jui Fu", "Wenze Hu", "Xianzhi Du", - "William Wang", + "William Yang Wang", "Yinfei Yang", "Zhe Gan" ], @@ -18159,7 +18159,7 @@ "id": 18617, "title": "Complex priors and flexible inference in recurrent circuits with dendritic nonlinearities", "authors": [ - "Benjamin Lyo", + "Benjamin S. H. Lyo", "Cristina Savin" ], "abstract": "Despite many successful examples in which probabilistic inference can account for perception, we have little understanding of how the brain represents and uses structured priors that capture the complexity of natural input statistics. Here we construct a recurrent circuit model that can implicitly represent priors over latent variables, and combine them with sensory and contextual sources of information to encode task-specific posteriors. 
Inspired by the recent success of diffusion models as means of learning and using priors over images, our model uses dendritic nonlinearities optimized for denoising, and stochastic somatic integration with the degree of noise modulated by an oscillating global signal. Combining these elements into a recurrent network yields a dynamical system that samples from the prior at a rate prescribed by the period of the global oscillator. Additional inputs reflecting sensory or top-down contextual information alter these dynamics to generate samples from the corresponding posterior, with different input gating patterns selecting different inference tasks. We demonstrate that this architecture can sample from low dimensional nonlinear manifolds and multimodal posteriors. Overall, the model provides a new framework for circuit-level representation of probabilistic information, in a format that facilitates flexible inference.", @@ -18225,7 +18225,7 @@ }, { "id": 18611, - "title": "Mask-based modeling for Neural Radiance Fields", + "title": "Mask-Based Modeling for Neural Radiance Fields", "authors": [ "Ganlin Yang", "Guoqiang Wei", @@ -18250,7 +18250,7 @@ "Chenhao Zhang", "Yawen Zhao", "Alina Bialkowski", - "Weitong Chen", + "Weitong Tony Chen", "Miao Xu" ], "abstract": "Machine unlearning aims to remove information derived from forgotten data while preserving that of the remaining dataset in a well-trained model. With the increasing emphasis on data privacy, several approaches to machine unlearning have emerged. However, these methods typically rely on complete supervision throughout the unlearning process. Unfortunately, obtaining such supervision, whether for the forgetting or remaining data, can be impractical due to the substantial cost associated with annotating real-world datasets. This challenge prompts us to propose a supervision-free unlearning approach that operates without the need for labels during the unlearning process. Specifically, we introduce a variational approach to approximate the distribution of representations for the remaining data. Leveraging this approximation, we adapt the original model to eliminate information from the forgotten data at the representation level. To further address the issue of lacking supervision information, which hinders alignment with ground truth, we introduce a contrastive loss to facilitate the matching of representations between the remaining data and those of the original model, thus preserving predictive performance. Experimental results across various unlearning tasks demonstrate the effectiveness of our proposed method, Label-Agnostic Forgetting (LAF) without using any labels, which achieves comparable performance to state-of-the-art methods that rely on full supervision information. Furthermore, our approach excels in semi-supervised scenarios, leveraging limited supervision information to outperform fully supervised baselines. 
This work not only showcases the viability of supervision-free unlearning in deep models but also opens up a new possibility for future research in unlearning at the representation level.", @@ -18267,7 +18267,7 @@ "title": "Interventional Fairness on Partially Known Causal Graphs: A Constrained Optimization Approach", "authors": [ "Aoqi Zuo", - "yiqing li", + "Yiqing Li", "Susan Wei", "Mingming Gong" ], @@ -18365,7 +18365,7 @@ "Heril Changwal", "Milan Aggarwal", "Sumit Bhatia", - "Yaman Singla", + "Yaman Kumar", "Balaji Krishnamurthy" ], "abstract": "Table understanding capability of Large Language Models (LLMs) has been extensively studied through the task of question-answering (QA) over tables. Typically, only a small part of the whole table is relevant to derive the answer for a given question. The irrelevant parts act as noise and are distracting information, resulting in sub-optimal performance due to the vulnerability of LLMs to noise. To mitigate this, we propose CABINET (Content RelevAnce-Based NoIse ReductioN for TablE QuesTion-Answering) \u2013 a framework to enable LLMs to focus on relevant tabular data by suppressing extraneous information. CABINET comprises an Unsupervised Relevance Scorer (URS), trained differentially with the QA LLM, that weighs the table content based on its relevance to the input question before feeding it to the question answering LLM (QA LLM). To further aid the relevance scorer, CABINET employs a weakly supervised module that generates a parsing statement describing the criteria of rows and columns relevant to the question and highlights the content of corresponding table cells. CABINET significantly outperforms various tabular LLM baselines, as well as GPT3-based in-context learning methods, is more robust to noise, maintains outperformance on tables of varying sizes, and establishes new SoTA performance on WikiTQ, FeTaQA, and WikiSQL datasets. We release our code and datasets here.", @@ -18465,7 +18465,7 @@ "Justin Dumouchelle", "Esther Julien", "Jannis Kurtz", - "Elias Khalil" + "Elias Boutros Khalil" ], "abstract": "Robust optimization provides a mathematical framework for modeling and solving decision-making problems under worst-case uncertainty. This work addresses two-stage robust optimization (2RO) problems (also called adjustable robust optimization), wherein first-stage and second-stage decisions are made before and after uncertainty is realized, respectively. This results in a nested min-max-min optimization problem which is extremely challenging computationally, especially when the decisions are discrete. We propose Neur2RO, an efficient machine learning-driven instantiation of column-and-constraint generation (CCG), a classical iterative algorithm for 2RO. Specifically, we learn to estimate the value function of the second-stage problem via a novel neural network architecture that is easy to optimize over by design. Embedding our neural network into CCG yields high-quality solutions quickly as evidenced by experiments on two 2RO benchmarks, knapsack and capital budgeting. On small or easy instances, Neur2RO recovers solutions of nearly the same quality as state-of-the-art methods but is most advantageous on large-scale instances, where it finds better solutions on average.", "type": "Poster", @@ -18484,10 +18484,10 @@ "authors": [ "Sumeet Batra", "Bryon Tjanaka", - "Matthew Fontaine", + "Matthew Christopher Fontaine", "Aleksei Petrenko", "Stefanos Nikolaidis", - "Gaurav Sukhatme" + "Gaurav S. 
Sukhatme" ], "abstract": "Training generally capable agents that thoroughly explore their environment andlearn new and diverse skills is a long-term goal of robot learning. Quality DiversityReinforcement Learning (QD-RL) is an emerging research area that blends thebest aspects of both fields \u2013 Quality Diversity (QD) provides a principled formof exploration and produces collections of behaviorally diverse agents, whileReinforcement Learning (RL) provides a powerful performance improvementoperator enabling generalization across tasks and dynamic environments. ExistingQD-RL approaches have been constrained to sample efficient, deterministic off-policy RL algorithms and/or evolution strategies and struggle with highly stochasticenvironments. In this work, we, for the first time, adapt on-policy RL, specificallyProximal Policy Optimization (PPO), to the Differentiable Quality Diversity (DQD)framework and propose several changes that enable efficient optimization anddiscovery of novel skills on high-dimensional, stochastic robotics tasks. Our newalgorithm, Proximal Policy Gradient Arborescence (PPGA), achieves state-of-the-art results, including a 4x improvement in best reward over baselines on thechallenging humanoid domain.", "type": "Spotlight Poster", @@ -18503,7 +18503,7 @@ "title": "Harnessing Density Ratios for Online Reinforcement Learning", "authors": [ "Philip Amortila", - "Dylan Foster", + "Dylan J Foster", "Nan Jiang", "Ayush Sekhari", "Tengyang Xie" @@ -18525,7 +18525,7 @@ "Gwangsu Kim", "Junghyun Lee", "Jinwoo Shin", - "Chang Yoo" + "Chang D. Yoo" ], "abstract": "Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data. One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is. The sample's distance to the decision boundary is a natural measure of predictive uncertainty, but it is often intractable to compute, especially for complex decision boundaries formed in multiclass classification tasks.To address this issue, this paper proposes the *least disagree metric* (LDM), defined as the smallest probability of disagreement of the predicted label, and an estimator for LDM proven to be asymptotically consistent under mild assumptions. The estimator is computationally efficient and can be easily implemented for deep learning models using parameter perturbation. The LDM-based active learning is performed by querying unlabeled data with the smallest LDM. Experimental results show that our LDM-based active learning algorithm obtains state-of-the-art *overall* performance on all considered datasets and deep architectures.", "type": "Poster", @@ -18560,7 +18560,7 @@ "Yunhao Ni", "Tengwei Song", "Jie Luo", - "Rao Anwer", + "Rao Muhammad Anwer", "Salman Khan", "Fahad Khan", "Lei Huang" @@ -18615,7 +18615,7 @@ }, { "id": 18571, - "title": "Distributionally Robust Optimization with Bias & Variance Reduced Gradients", + "title": "Distributionally Robust Optimization with Bias and Variance Reduction", "authors": [ "Ronak Mehta", "Vincent Roulet", @@ -18696,7 +18696,7 @@ "Li Ren", "Chen Chen", "Liqiang Wang", - "Kien Hua" + "Kien A. Hua" ], "abstract": "Deep Metric Learning (DML) has long attracted the attention of the machine learning community as a key objective. Existing solutions concentrate on fine-tuning the pre-trained models on conventional image datasets. 
As a result of the success of recent pre-trained models derived from larger-scale datasets, it is challenging to adapt the model to the DML tasks in the local data domain while retaining the previously gained knowledge. In this paper, we investigate parameter-efficient methods for fine-tuning the pre-trained model for DML tasks. In particular, we propose a novel and effective framework based on learning Visual Prompts (VPT) in the pre-trained Vision Transformers (ViT). Based on the conventional proxy-based DML paradigm, we augment the proxy by incorporating the semantic information from the input image and the ViT, in which we optimize the visual prompts for each class. We demonstrate that our new approximations with semantic information are superior to representative capabilities, thereby improving metric learning performance. We conduct extensive experiments to demonstrate that our proposed framework is superior and efficient by evaluating popular DML benchmarks. In particular, we demonstrate that our fine-tuning method achieves comparable or even better performance than recent state-of-the-art full fine-tuning works of DML while tuning only a small percentage of total parameters.", "type": "Poster", @@ -18768,7 +18768,7 @@ "Yujia Xie", "Hongyin Luo", "Yoon Kim", - "James R Glass", + "James R. Glass", "Pengcheng He" ], "abstract": "Despite their impressive capabilities, large language models (LLMs) are prone to hallucinations, i.e., generating content that deviates from facts seen during pretraining. We propose a simple decoding strategy for reducing hallucinations with pretrained LLMs that does not require conditioning on retrieved external knowledge nor additional fine-tuning. Our approach obtains the next-token distribution by contrasting the differences in logits obtained from projecting the later layers versus earlier layers to the vocabulary space, exploiting the fact that factual knowledge in an LLMs has generally been shown to be localized to particular transformer layers. We find that this **D**ecoding by C**o**ntrasting **La**yers (DoLa) approach is able to better surface factual knowledge and reduce the generation of incorrect facts. DoLa consistently improves the truthfulness across multiple choices tasks and open-ended generation tasks, for example improving the performance of LLaMA family models on TruthfulQA by 12-17% absolute points, demonstrating its potential in making LLMs reliably generate truthful facts.", @@ -18879,7 +18879,7 @@ "title": "Generative Adversarial Equilibrium Solvers", "authors": [ "Denizalp Goktas", - "David Parkes", + "David C. Parkes", "Ian Gemp", "Luke Marris", "Georgios Piliouras", @@ -18902,9 +18902,9 @@ "authors": [ "Yingtao Zhang", "Haoli Bai", - "Jialin Zhao", "Haokun Lin", - "LU HOU", + "Jialin Zhao", + "Lu Hou", "Carlo Vittorio Cannistraci" ], "abstract": "With the rapid growth of large language models (LLMs), there is increasing demand for memory and computation for LLMs. Recent efforts on post-training pruning of LLMs aim to reduce the model size and computation, yet the performance is still sub-optimal. 
In this paper, we present a plug-and-play solution for post-training pruning of LLMs. The proposed solution has two innovative components: 1) **Relative Importance and Activations** (RIA), a new pruning metric that jointly considers the weight and activations efficiently on LLMs; and 2) **Channel Permutation**, a new approach to maximally preserve important weights under N:M sparsity. The proposed two components can be readily combined to further enhance the N:M structuredly pruned LLMs. Our empirical experiments show that RIA alone can already surpass all existing post-training pruning methods on prevalent LLMs, e.g., LLaMA ranging from 7B to 65B. Furthermore, N:M structured pruning with channel permutation can even outperform the original LLaMA2 70B on zero-shot tasks, together with practical speed-up on specific hardware.", "type": "Poster", @@ -18923,7 +18923,7 @@ "Ashmit Khandelwal", "Aditya Agrawal", "Aanisha Bhattacharyya", - "Yaman Singla", + "Yaman Kumar", "Somesh Singh", "Uttaran Bhattacharya", "Ishita Dasgupta", @@ -18979,7 +18979,7 @@ }, { "id": 18543, - "title": "Defending Against Transfer Attacks From Public Models", + "title": "PubDef: Defending Against Transfer Attacks From Public Models", "authors": [ "Chawin Sitawarin", "Jaewon Chang", @@ -19037,7 +19037,7 @@ "id": 18540, "title": "Safe RLHF: Safe Reinforcement Learning from Human Feedback", "authors": [ - "Juntao Dai", + "Josef Dai", "Xuehai Pan", "Ruiyang Sun", "Jiaming Ji", @@ -19138,7 +19138,7 @@ "Xiangyu Kong", "Junqi Wang", "Bangcheng Yang", - "pring wong", + "Pring Wong", "Yifan Zhong", "Xiaoyuan Zhang", "Zhaowei Zhang", @@ -19198,7 +19198,7 @@ "id": 18527, "title": "Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals", "authors": [ - "Yair Gat", + "Yair Ori Gat", "Nitay Calderon", "Amir Feder", "Alexander Chapanin", @@ -19256,7 +19256,7 @@ "Junyoung Seo", "Wooseok Jang", "Min-Seop Kwak", - "In\u00e8s Hyeonsu Kim", + "Hyeonsu Kim", "Jaehoon Ko", "Junho Kim", "Jin-Hwa Kim", @@ -19274,13 +19274,13 @@ }, { "id": 18522, - "title": "The Discovery of Binding Modes Requires Rethinking Docking Generalization", + "title": "Deep Confident Steps to New Pockets: Strategies for Docking Generalization", "authors": [ "Gabriele Corso", "Arthur Deng", "Nicholas Polizzi", "Regina Barzilay", - "Tommi Jaakkola" + "Tommi S. Jaakkola" ], "abstract": "Accurate blind docking has the potential to lead to new biological breakthroughs, but for this promise to be realized, it is critical that docking methods generalize well across the proteome. However, existing benchmarks fail to rigorously assess generalizability. Therefore, we develop DockGen, a new benchmark based on the ligand-binding domains of proteins, and we show that machine learning-based docking models have very weak generalization abilities even when combined with various data augmentation strategies. Instead, we propose Confidence Bootstrapping, a new training paradigm that solely relies on the interaction between a diffusion and a confidence model. Unlike previous self-training methods from other domains, we directly exploit the multi-resolution generation process of diffusion models using rollouts and confidence scores to reduce the generalization gap.
We demonstrate that Confidence Bootstrapping significantly improves the ability of ML-based docking methods to dock to unseen protein classes, edging closer to accurate and generalizable blind docking methods.", "type": "Poster", @@ -19295,7 +19295,7 @@ "id": 18521, "title": "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models", "authors": [ - "YEFEI HE", + "Yefei He", "Jing Liu", "Weijia Wu", "Hong Zhou", @@ -19365,7 +19365,7 @@ "Aristide Baratin", "Jonathan Cornford", "Stefan Mihalas", - "Eric SheaBrown", + "Eric Todd SheaBrown", "Guillaume Lajoie" ], "abstract": "In theoretical neuroscience, recent work leverages deep learning tools to explore how some network attributes critically influence its learning dynamics. Notably, initial weight distributions with small (resp. large) variance may yield a rich (resp. lazy) regime, where significant (resp. minor) changes to network states and representation are observed over the course of learning. However, in biology, neural circuit connectivity generally has a low-rank structure and therefore differs markedly from the random initializations generally used for these studies. As such, here we investigate how the structure of the initial weights \u2014 in particular their effective rank \u2014 influences the network learning regime. Through both empirical and theoretical analyses, we discover that high-rank initializations typically yield smaller network changes indicative of lazier learning, a finding we also confirm with experimentally-driven initial connectivity in recurrent neural networks. Conversely, low-rank initialization biases learning towards richer learning. Importantly, however, as an exception to this rule, we find lazier learning can still occur with a low-rank initialization that aligns with task and data statistics. Our research highlights the pivotal role of initial weight structures in shaping learning regimes, with implications for metabolic costs of plasticity and risks of catastrophic forgetting.", @@ -19383,7 +19383,7 @@ "authors": [ "Ivan Grega", "Ilyes Batatia", - "G\u00e1bor Cs\u00e1nyi", + "Gabor Csanyi", "Sri Karlapati", "Vikram Deshpande" ], @@ -19398,7 +19398,7 @@ }, { "id": 17633, - "title": "Whole-song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models", + "title": "Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models", "authors": [ "Ziyu Wang", "Lejun Min", @@ -19453,7 +19453,7 @@ "id": 17630, "title": "VFLAIR: A Research Library and Benchmark for Vertical Federated Learning", "authors": [ - "TIANYUAN ZOU", + "Tianyuan Zou", "Zixuan GU", "Yu He", "Hideaki Takahashi", @@ -19473,7 +19473,7 @@ }, { "id": 17628, - "title": "BatteryML:An Open-source platform for Machine Learning on Battery Degradation", + "title": "BatteryML: An Open-source Platform for Machine Learning on Battery Degradation", "authors": [ "Han Zhang", "Xiaofan Gui", @@ -19501,7 +19501,7 @@ "Shiyu Wang", "Lintao Ma", "Zhixuan Chu", - "james zhang", + "James Y. 
Zhang", "Xiaoming Shi", "Pin-Yu Chen", "Yuxuan Liang", @@ -19523,7 +19523,7 @@ "title": "Fourier Transporter: Bi-Equivariant Robotic Manipulation in 3D", "authors": [ "Haojie Huang", - "Owen Howell", + "Owen Lewis Howell", "Dian Wang", "Xupeng Zhu", "Robert Platt", @@ -19540,7 +19540,7 @@ }, { "id": 19495, - "title": "Where We Have Arrived in Proving the Emergence of Sparse Interaction Primitives in AI Models", + "title": "Where We Have Arrived in Proving the Emergence of Sparse Interaction Primitives in DNNs", "authors": [ "Qihan Ren", "Jiayang Gao", @@ -19641,7 +19641,7 @@ "Joey Tianyi Zhou", "Jian Wu", "Wanlu Liu", - "Howard Yang", + "Howard Hao Yang", "Zuozhu Liu" ], "abstract": "Federated Long-Tailed Learning (Fed-LT), a paradigm wherein data collected from decentralized local clients manifests a globally prevalent long-tailed distribution, has garnered considerable attention in recent times. In the context of Fed-LT, existing works have predominantly centered on addressing the data imbalance issue to enhance the efficacy of the generic global model while neglecting the performance at the local level. In contrast, conventional Personalized Federated Learning (pFL) techniques are primarily devised to optimize personalized local models under the presumption of a balanced global data distribution. This paper introduces an approach termed Federated Local and Generic Model Training in Fed-LT (FedLoGe), which enhances both local and generic model performance through the integration of representation learning and classifier alignment within a neural collapse framework. Our investigation reveals the feasibility of employing a shared backbone as a foundational framework for capturing overarching global trends, while concurrently employing individualized classifiers to encapsulate distinct refinements stemming from each client\u2019s local features. Building upon this discovery, we establish the Static Sparse Equiangular Tight Frame Classifier (SSE-C), inspired by neural collapse principles that naturally prune extraneous noisy features and foster the acquisition of potent data representations. Furthermore, leveraging insights from imbalance neural collapse's classifier norm patterns, we develop Global and Local Adaptive Feature Realignment (GLA-FR) via an auxiliary global classifier and personalized Euclidean norm transfer to align global features with client preferences. Extensive experimental results on CIFAR-10/100-LT, ImageNet, and iNaturalist demonstrate the advantage of our method over state-of-the-art pFL and Fed-LT approaches.", @@ -19693,11 +19693,11 @@ "id": 18508, "title": "Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization", "authors": [ - "Jin Zhou", - "Charles Staats", + "Jin Peng Zhou", + "Charles E Staats", "Wenda Li", "Christian Szegedy", - "Kilian Weinberger", + "Kilian Q Weinberger", "Yuhuai Wu" ], "abstract": "Large language models (LLM), such as Google's Minerva and OpenAI's GPT families, are becoming increasingly capable of solving mathematical quantitative reasoning problems. However, they still make unjustified logical and computational errors in their reasoning steps and answers. In this paper, we leverage the fact that if the training corpus of LLMs contained sufficiently many examples of formal mathematics (e.g. in Isabelle, a formal theorem proving environment), they can be prompted to translate i.e. 
autoformalize informal mathematical statements into formal Isabelle code --- which can be verified automatically for internal consistency. This provides a mechanism to automatically reject solutions whose formalized versions are inconsistent within themselves or with the formalized problem statement. We evaluate our method on GSM8K, MATH and MultiArith datasets and demonstrate that our approach provides a consistently better heuristic than vanilla majority voting --- the previously best method to identify correct answers, by more than 12\\% on GSM8K. In our experiments it improves results consistently across all datasets and LLM model sizes.", @@ -19711,13 +19711,13 @@ }, { "id": 18507, - "title": "GnnX-Bench: Unravelling the Utility of Perturbation-based GNN Explainers through In-depth Benchmarking", + "title": "GNNX-BENCH: Unravelling the Utility of Perturbation-based GNN Explainers through In-depth Benchmarking", "authors": [ "Mert Kosan", "Samidha Verma", "Burouj Armgaan", "Khushbu Pahwa", - "Ambuj K Singh", + "Ambuj Singh", "Sourav Medya", "Sayan Ranu" ], @@ -19874,8 +19874,8 @@ "authors": [ "Pratyush Maini", "Sachin Goyal", - "Zachary Lipton", - "Zico Kolter", + "Zachary Chase Lipton", + "J Zico Kolter", "Aditi Raghunathan" ], "abstract": "Large web-crawled multimodal datasets have powered a slew of new methods for learning general-purpose visual representations, advancing the state of the art in computer vision and revolutionizing zero- and few-shot recognition. One crucial decision facing practitioners is how, if at all, to curate these ever-larger datasets. For example, the creators of the LAION-5B dataset chose to retain only image-caption pairs whose CLIP similarity score exceeded a designated threshold. In this paper, we propose a new state-of-the-art data filtering approach motivated by our observation that nearly $40\\%$ of LAION's images contain text that overlaps significantly with the caption. Intuitively, such data could be wasteful as it incentivizes models to perform optical character recognition rather than learning visual features. However, naively removing all such data could also be wasteful, as it throws away images that contain visual features (in addition to overlapping text). Our simple and scalable approach, T-MARS (Text Masking and Re-Scoring), filters out only those pairs where the text dominates the remaining visual features---by first masking out the text and then filtering out those with a low CLIP similarity score of the masked image with original captions. Experimentally, T-MARS is the top ranked approach on Imagenet at ``medium scale'' of DataComp (a data filtering benchmark), and outperforms CLIP filtering by a margin of $6.5\\%$ on ImageNet and $4.7\\%$ on VTAB. Additionally, we show that the accuracy gains enjoyed by T-MARS linearly increase as data and compute are scaled exponentially.", @@ -19912,7 +19912,7 @@ "Jiajun He", "Gergely Flamich", "Zongyu Guo", - "Jos\u00e9 Miguel Hern\u00e1ndez Lobato" + "Jos\u00e9 Miguel Hern\u00e1ndez-Lobato" ], "abstract": "COMpression with Bayesian Implicit NEural Representations (COMBINER) is a recent data compression method that addresses a key inefficiency of previous Implicit Neural Representation (INR)-based approaches: it avoids quantization and enables direct optimization of the rate-distortion performance. 
However, COMBINER still has significant limitations: 1) it uses factorized priors and posterior approximations that lack flexibility; 2) it cannot effectively adapt to local deviations from global patterns in the data; and 3) its performance can be susceptible to modeling choices and the variational parameters' initializations. Our proposed method, Robust and Enhanced COMBINER (RECOMBINER), addresses these issues by 1) enriching the variational approximation while maintaining its computational cost via a linear reparameterization of the INR weights, 2) augmenting our INRs with learnable positional encodings that enable them to adapt to local details and 3) splitting high-resolution data into patches to increase robustness and utilizing expressive hierarchical priors to capture dependency across patches. We conduct extensive experiments across several data modalities, showcasing that RECOMBINER achieves competitive results with the best INR-based methods and even outperforms autoencoder-based codecs on low-resolution images at low bitrates.", "type": "Poster", @@ -19931,7 +19931,7 @@ "authors": [ "Mikhail Khodak", "Edmond Chow", - "Nina Balcan", + "Maria Florina Balcan", "Ameet Talwalkar" ], "abstract": "Solving a linear system ${\\bf Ax}={\\bf b}$ is a fundamental scientific computing primitive, and numerous solvers and preconditioners have been developed. These come with parameters whose optimal values depend on the system being solved but are often impossible or too expensive to identify; thus in practice sub-optimal heuristics are used instead. We consider the common setting in which many related linear systems are solved, e.g. during a single numerical simulation. In this scenario, can we sequentially choose parameters that attain a near-optimal overall number of iterations, without extra matrix computations? We answer in the affirmative for Successive Over-Relaxation~(SOR), a standard solver whose parameter $\\omega$ has a strong impact on its runtime. For this method, we prove that a bandit algorithm\u2014using only the number of iterations as feedback\u2014can select parameters for a sequence of instances such that the overall cost is almost as good as that the best fixed $\\omega$ would have obtained. Furthermore, when given additional structural information, we show that a {\\em contextual} bandit method approaches the performance of the {\\em instance-optimal} policy, which selects the best $\\omega$ for each instance. Our work provides the first learning-theoretic treatment of high-precision linear system solvers and the first end-to-end guarantees for data-driven scientific computing, demonstrating theoretically the potential to speed up numerical methods using well-understood learning algorithms.", @@ -20004,7 +20004,7 @@ "title": "A Flexible Generative Model for Heterogeneous Tabular EHR with Missing Modality", "authors": [ "Huan He", - "Yijie Hao", + "William hao", "Yuanzhe Xi", "Yong Chen", "Bradley Malin", @@ -20045,8 +20045,8 @@ "authors": [ "Antonis Antoniades", "Yiyi Yu", - "Joe Canzano", - "William Wang", + "Joe S Canzano", + "William Yang Wang", "Spencer Smith" ], "abstract": "State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis. Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an auto-regressive spatiotemporal generation problem. 
Neuroformer is a multimodal, multitask generative pre-trained transformer (GPT) model that is specifically designed to handle the intricacies of data in systems neuroscience. It scales linearly with feature size, can process an arbitrary number of modalities, and is adaptable to downstream tasks, such as predicting behavior. We first trained Neuroformer on simulated datasets, and found that it both accurately predicted simulated neuronal circuit activity, and also intrinsically inferred the underlying neural circuit connectivity, including direction. When pretrained to decode neural responses, the model predicted the behavior of a mouse with only few-shot fine-tuning, suggesting that the model begins learning how to do so directly from the neural representations themselves, without any explicit supervision. We used an ablation study to show that joint training on neuronal responses and behavior boosted performance, highlighting the model's ability to associate behavioral and neural representations in an unsupervised manner. These findings show that Neuroformer can analyze neural datasets and their emergent properties, informing the development of models and hypotheses associated with the brain.", @@ -20083,7 +20083,7 @@ "Prithvijit Chattopadhyay", "Bharat Goyal", "Boglarka Ecsedi", - "Viraj Prabhu", + "Viraj Uday Prabhu", "Judy Hoffman" ], "abstract": "Synthetic data (Sim) drawn from simulators have emerged as a popular alternative for training models where acquiring annotated real-world images is difficult. However, transferring models trained on synthetic images to real-world applications can be challenging due to appearance disparities. A commonly employed solution to counter this Sim2Real gap is unsupervised domain adaptation, where models are trained using labeled Sim data and unlabeled Real data. Mispredictions made by such Sim2Real adapted models are often associated with miscalibration \u2013 stemming from overconfident predictions on real data. In this paper, we introduce AUGCAL, a simple training-time patch for unsupervised adaptation that improves Sim2Real adapted models by \u2013 (1) reducing overall miscalibration, (2) reducing overconfidence in incorrect predictions and (3) improving confidence score reliability by better guiding misclassification detection \u2013 all while retaining or improving Sim2Real performance. Given a base Sim2Real adaptation algorithm, at training time, AUGCAL involves replacing vanilla Sim images with strongly augmented views (AUG intervention) and additionally optimizing for a training time calibration loss on augmented Sim predictions (CAL intervention). We motivate AUGCAL using a brief analytical justification of how to reduce miscalibration on unlabeled Real data. Through our experiments, we empirically show the efficacy of AUGCAL across multiple adaptation methods, backbones, tasks and shifts.", @@ -20124,7 +20124,7 @@ "Bowen Dong", "Hang Xu", "Songcen Xu", - "Rynson W Lau", + "Rynson W. H. Lau", "Wangmeng Zuo" ], "abstract": "Recent works learn 3D representation explicitly under text-3D guidance. However, limited text-3D data restricts the vocabulary scale and text control of generations. Generators may easily fall into a stereotype concept for certain text prompts, thus losing open-vocabulary generation ability. To tackle this issue, we introduce a conditional 3D generative model, namely TextField3D. 
Specifically, rather than using the text prompts as input directly, we suggest to inject dynamic noise into the latent space of given text prompts, i.e., Noisy Text Fields (NTFs). In this way, limited 3D data can be mapped to the appropriate range of textual latent space that is expanded by NTFs. To this end, an NTFGen module is proposed to model general text latent code in noisy fields. Meanwhile, an NTFBind module is proposed to align view-invariant image latent code to noisy fields, further supporting image-conditional 3D generation. To guide the conditional generation in both geometry and texture, multi-modal discrimination is constructed with a text-3D discriminator and a text-2.5D discriminator. Compared to previous methods, TextField3D includes three merits: 1) large vocabulary, 2) text consistency, and 3) low latency. Extensive experiments demonstrate that our method achieves a potential open-vocabulary 3D generation capability.", @@ -20143,7 +20143,7 @@ "Katherine Tian", "Eric Mitchell", "Huaxiu Yao", - "Christopher Manning", + "Christopher D Manning", "Chelsea Finn" ], "abstract": "The fluency and creativity of large pre-trained language models (LLMs) have led to their widespread use, sometimes even as a replacement for traditional search engines. However, language models are prone to making convincing but factually inaccurate claims, often referred to as 'hallucinations', which can harmfully perpetuate myths and misconceptions. Further, manual fact-checking of model responses is a time-consuming process, making human factuality labels expensive to acquire. In this work, we leverage two key recent innovations in NLP to fine-tune language models to be more factual without human labeling, targeting more open-ended generation settings than past work. First, several recent works have proposed methods for scoring the factuality of open-ended text derived from consistency with an external knowledge base or simply a large model's confidence scores. Second, the Direct Preference Optimization algorithm enables straightforward fine-tuning of language models on objectives other than supervised imitation, using a preference ranking over possible model responses. We show that learning from preference rankings generated by either automated criterion significantly improves the factuality of Llama-2 on held-out topics (percent of generated claims that are correct) compared with existing RLHF procedures or decoding strategies targeted at factuality, showing over 50% and 20-30% error reduction for biographies and medical questions respectively.", @@ -20160,12 +20160,12 @@ "title": "Modeling state-dependent communication between brain regions with switching nonlinear dynamical systems", "authors": [ "Orren Karniol-Tambour", - "David Zoltowski", + "David M. Zoltowski", "E. Mika Diamanti", "Lucas Pinto", - "Carlos Brody", - "David Tank", - "Jonathan Pillow" + "Carlos D Brody", + "David W. Tank", + "Jonathan W. Pillow" ], "abstract": "Understanding how multiple brain regions interact to produce behavior is a major challenge in systems neuroscience, with many regions causally implicated in common tasks such as sensory processing and decision making. A precise description of interactions between regions remains an open problem. Moreover, neural dynamics are nonlinear and non-stationary. Here, we propose MR-SDS, a multiregion, switching nonlinear state space model that decomposes global dynamics into local and cross-communication components in the latent space. 
MR-SDS includes directed interactions between brain regions, allowing for estimation of state-dependent communication signals, and accounts for sensory inputs effects, history effects, and heterogeneity across days and animals. We show that our model accurately recovers latent trajectories, vector fields underlying switching nonlinear dynamics, and cross-region communication profiles in three simulations. We then apply our method to two large-scale, multi-region neural datasets involving mouse decision making. The first includes hundreds of neurons per region, recorded simultaneously at single-cell-resolution across 3 distant cortical regions. The second is a mesoscale widefield dataset of 8 adjacent cortical regions imaged across both hemispheres. On these multi-region datasets, our model outperforms existing piece-wise linear multi-region models and reveals multiple distinct dynamical states and a rich set of cross-region communication profiles.", "type": "Poster", @@ -20200,7 +20200,7 @@ "title": "STanHop: Sparse Tandem Hopfield Model for Memory-Enhanced Time Series Prediction", "authors": [ "Dennis Wu", - "Jerry Hu", + "Jerry Yao-Chieh Hu", "Weijian Li", "Bo-Yu Chen", "Han Liu" @@ -20221,7 +20221,7 @@ "Tim De Ryck", "Florent Bonnet", "Siddhartha Mishra", - "Emmanuel de B\u00e9zenac" + "Emmanuel de Bezenac" ], "abstract": "In this paper, we investigate the behavior of gradient descent algorithms in physics-informed machine learning methods like PINNs, which minimize residuals connected to partial differential equations (PDEs). Our key result is that the difficulty in training these models is closely related to the conditioning of a specific differential operator. This operator, in turn, is associated to the Hermitian square of the differential operator of the underlying PDE. If this operator is ill-conditioned, it results in slow or infeasible training. Therefore, preconditioning this operator is crucial. We employ both rigorous mathematical analysis and empirical evaluations to investigate various strategies, explaining how they better condition this critical operator, and consequently improve training.", "type": "Poster", @@ -20234,7 +20234,7 @@ }, { "id": 18472, - "title": "Latent Intuitive Physics: Learning to Transfer Hidden Physics from a 3D Video", + "title": "Latent Intuitive Physics: Learning to Transfer Hidden Physics from A 3D Video", "authors": [ "Xiangming Zhu", "Huayu Deng", @@ -20278,7 +20278,7 @@ "Xianghong Fang", "Jian Li", "Qiang Sun", - "Wang Benyou" + "Benyou Wang" ], "abstract": "Uniformity plays a crucial role in the assessment of learned representations, contributing to a deeper comprehension of self-supervised learning. The seminal work by \\citet{Wang2020UnderstandingCR} introduced a uniformity metric that quantitatively measures the collapse degree of learned representations. Directly optimizing this metric together with alignment proves to be effective in preventing constant collapse. However, we present both theoretical and empirical evidence revealing that this metric lacks sensitivity to dimensional collapse, highlighting its limitations. To address this limitation and design a more effective uniformity metric, this paper identifies five fundamental properties, some of which the existing uniformity metric fails to meet. We subsequently introduce a novel uniformity metric that satisfies all of these desiderata and exhibits sensitivity to dimensional collapse. 
When applied as an auxiliary loss in various established self-supervised methods, our proposed uniformity metric consistently enhances their performance in downstream tasks.", "type": "Poster", @@ -20431,7 +20431,7 @@ "title": "Visual Data-Type Understanding does not emerge from scaling Vision-Language Models", "authors": [ "Vishaal Udandarao", - "Max F. Burg", + "Max F Burg", "Samuel Albanie", "Matthias Bethge" ], @@ -20450,7 +20450,7 @@ "id": 18459, "title": "Training-free Multi-objective Diffusion Model for 3D Molecule Generation", "authors": [ - "XU HAN", + "Xu Han", "Caihua Shan", "Yifei Shen", "Can Xu", @@ -20491,11 +20491,11 @@ "title": "Idempotent Generative Network", "authors": [ "Assaf Shocher", - "Amil Dravid", + "Amil V Dravid", "Yossi Gandelsman", "Inbar Mosseri", "Michael Rubinstein", - "Alexei Efros" + "Alexei A Efros" ], "abstract": "We propose a new approach for generative modeling based on training a neural network to be idempotent. An idempotent operator is one that can be applied sequentially without changing the result beyond the initial application, namely $f(f(z))=f(z)$. The proposed model $f$ is trained to map a source distribution (e.g, Gaussian noise) to a target distribution (e.g. realistic images) using the following objectives: (1) Instances from the target distribution should map to themselves, namely $f(x)=x$. We define the target manifold as the set of all instances that $f$ maps to themselves.(2) Instances that form the source distribution should map onto the defined target manifold. This is achieved by optimizing the idempotence term, $f(f(z))=f(z)$ which encourages the range of $f(z)$ to be on the target manifold. Under ideal assumptions such a process provably converges to the target distribution. This strategy results in a model capable of generating an output in one step, maintaining a consistent latent space, while also allowing sequential applications for refinement. Additionally, we find that by processing inputs from both target and source distributions, the model adeptly projects corrupted or modified data back to the target manifold. This work is a first step towards a ``global projector'' that enables projecting any input into a target data distribution.", "type": "Poster", @@ -20534,7 +20534,7 @@ "authors": [ "Alexander Robey", "Fabian Latorre", - "George Pappas", + "George J. Pappas", "Hamed Hassani", "Volkan Cevher" ], @@ -20570,8 +20570,8 @@ "title": "Abstractors and relational cross-attention: An inductive bias for explicit relational reasoning in Transformers", "authors": [ "Awni Altabaa", - "Taylor Webb", - "Jonathan Cohen", + "Taylor Whittington Webb", + "Jonathan D. Cohen", "John Lafferty" ], "abstract": "An extension of Transformers is proposed that enables explicit relational reasoning through a novel module called the *Abstractor*. At the core of the Abstractor is a variant of attention called *relational cross-attention*. The approach is motivated by an architectural inductive bias for relational learning that disentangles relational information from extraneous features about individual objects. This enables explicit relational reasoning, supporting abstraction and generalization from limited data. The Abstractor is first evaluated on simple discriminative relational tasks and compared to existing relational architectures. Next, the Abstractor is evaluated on purely relational sequence-to-sequence tasks, where dramatic improvements are seen in sample efficiency compared to standard Transformers. 
Finally, Abstractors are evaluated on a collection of tasks based on mathematical problem solving, where modest but consistent improvements in performance and sample efficiency are observed.", @@ -20609,9 +20609,9 @@ "authors": [ "Simone Magistri", "Tomaso Trinci", - "Albin Soutif--Cormerais", + "Albin Soutif", "Joost van de Weijer", - "Andrew Bagdanov" + "Andrew D. Bagdanov" ], "abstract": "Exemplar-Free Class Incremental Learning (EFCIL) aims to learn from a sequence of tasks without having access to previous task data. In this paper, we consider the challenging Cold Start scenario in which insufficient data is available in the first task to learn a high-quality backbone. This is especially challenging for EFCIL since it requires high plasticity, which results in feature drift which is difficult to compensate for in the exemplar-free setting. To address this problem, we propose a simple and effective approach that consolidates feature representations by regularizing drift in directions highly relevant to previous tasks and employs prototypes to reduce task-recency bias. Our method, called Elastic Feature Consolidation (EFC), exploits a tractable second-order approximation of feature drift based on an Empirical Feature Matrix (EFM). The EFM induces a pseudo-metric in feature space which we use to regularize feature drift in important directions and to update Gaussian prototypes used in a novel asymmetric cross entropy loss which effectively balances prototype rehearsal with data from new tasks. Experimental results on CIFAR-100, Tiny-ImageNet, ImageNet-Subset and ImageNet-1K demonstrate that Elastic Feature Consolidation is better able to learn new tasks by maintaining model plasticity and significantly outperform the state-of-the-art.", "type": "Poster", @@ -20652,7 +20652,7 @@ "Di Wu", "Zhiyuan Chen", "Jiangbin Zheng", - "Stan Z Li" + "Stan Z. Li" ], "abstract": "By contextualizing the kernel as globally as possible, Modern ConvNets have shown great potential in computer vision tasks. However, recent progress of \\textit{multi-order game-theoretic interaction} in deep neural networks (DNNs) shows that the representation capacity of modern ConvNets has not been well unleashed, where the most expressive interactions have not been effectively encoded with the increased kernel size. To address this challenge, we propose a new family of modern ConvNets, dubbed MogaNet, for discriminative visual representation learning in pure ConvNet-based models, with preferable complexity-performance trade-offs. MogaNet encapsulates conceptually simple yet effective convolutions and gated aggregation into a compact module, where discriminative features are efficiently gathered and contextualized in an adaptive manner. Extensive experiments show that MogaNet exhibits great scalability, impressive efficiency of model parameters, and competitive performance compared to state-of-the-art ViTs and ConvNets on ImageNet and various downstream vision benchmarks, including COCO object detection, ADE20K semantic segmentation, 2D\\&3D human pose estimation, and video prediction. 
Notably, MogaNet hits 80.0\\% and 87.8\\% accuracy with 5.2M and 181M parameters on ImageNet-1K, outperforming ParC-Net and ConvNeXt-L, while saving 59\\% FLOPs and 17M parameters, respectively.", "type": "Poster", @@ -20671,7 +20671,7 @@ "authors": [ "Chongyi Zheng", "Benjamin Eysenbach", - "Homer Walke", + "Homer Rich Walke", "Patrick Yin", "Kuan Fang", "Ruslan Salakhutdinov", @@ -20710,8 +20710,8 @@ "Xianjun Yang", "Wei Cheng", "Yue Wu", - "Linda Petzold", - "William Wang", + "Linda Ruth Petzold", + "William Yang Wang", "Haifeng Chen" ], "abstract": "Large language models (LLMs) have notably enhanced the fluency and diversity of machine-generated text. However, this progress also presents a significant challenge in detecting the origin of a given text, and current research on detection methods lags behind the rapid evolution of LLMs. Conventional training-based methods have limitations in flexibility, particularly when adapting to new domains, and they often lack explanatory power. To address this gap, we propose a novel training-free detection strategy called Divergent N-Gram Analysis (DNA-GPT). Given a text, we first truncate it in the middle and then use only the preceding portion as input to the LLMs to regenerate the new remaining parts. By analyzing the differences between the original and new remaining parts through N-gram analysis in black-box or probability divergence in white-box, we can clearly illustrate significant discrepancies between machine-generated and human-written text. We conducted extensive experiments on the most advanced LLMs from OpenAI, including text-davinci-003, GPT-3.5-turbo, and GPT-4, as well as open-source models such as GPT-NeoX-20B and LLaMa-13B. Results show that our zero-shot approach exhibits state-of-the-art performance in distinguishing between human and GPT-generated text on four English and one German dataset, outperforming OpenAI's own classifier, which is trained on millions of text. Additionally, our methods provide reasonable explanations and evidence to support our claim, which is a unique feature of explainable detection. Our method is also robust under the revised text attack and can additionally solve model sourcing.", @@ -20725,13 +20725,13 @@ }, { "id": 18443, - "title": "Adding 3D Geometry Control to Diffusion Models", + "title": "Generating Images with 3D Annotations Using Diffusion Models", "authors": [ "Wufei Ma", "Qihao Liu", "Jiahao Wang", - "Xiaoding Yuan", "Angtian Wang", + "Xiaoding Yuan", "Yi Zhang", "Zihao Xiao", "Guofeng Zhang", @@ -20758,8 +20758,8 @@ "Kaifeng Lyu", "Jikai Jin", "Zhiyuan Li", - "Simon Du", - "Jason Lee", + "Simon Shaolei Du", + "Jason D. Lee", "Wei Hu" ], "abstract": "Recent work by Power et al. (2022) highlighted a surprising \"grokking\" phenomenon in learning arithmetic tasks: a neural net first \"memorizes\" the training set, resulting in perfect training accuracy but near-random test accuracy, and after training for sufficiently longer, it suddenly transitions to perfect test accuracy. This paper studies the grokking phenomenon in theoretical setups and shows that it can be induced by a dichotomy of early and late phase implicit biases. 
Specifically, when training homogeneous neural nets with large initialization and small weight decay on both classification and regression tasks, we prove that the training process gets trapped at a solution corresponding to a kernel predictor for a long time, and then a very sharp transition to min-norm/max-margin predictors occurs, leading to a dramatic change in test accuracy. Even in the absence of weight decay, we show that grokking can still happen when the late phase implicit bias is driven by other regularization mechanisms, such as implicit margin maximization or sharpness reduction.", @@ -20811,7 +20811,7 @@ "id": 18436, "title": "DecompOpt: Controllable and Decomposed Diffusion Models for Structure-based Molecular Optimization", "authors": [ - "Nick Zhou", + "Xiangxin Zhou", "Xiwei Cheng", "Yuwei Yang", "Yu Bao", @@ -20834,7 +20834,7 @@ "Oscar Sainz", "Iker Garc\u00eda-Ferrero", "Rodrigo Agerri", - "Oier Lacalle", + "Oier Lopez de Lacalle", "German Rigau", "Eneko Agirre" ], @@ -20926,9 +20926,9 @@ "Juyeon Heo", "Songyou Peng", "Yandong Wen", - "Michael J Black", + "Michael J. Black", "Adrian Weller", - "Bernhard Schoelkopf" + "Bernhard Sch\u00f6lkopf" ], "abstract": "Large foundation models are becoming ubiquitous, but training them from scratch is prohibitively expensive. Thus, efficiently adapting these powerful models to downstream tasks is increasingly important. In this paper, we study a principled finetuning paradigm -- Orthogonal Finetuning (OFT) -- for downstream task adaptation. Despite demonstrating good generalizability, OFT still uses a fairly large number of trainable parameters due to the high dimensionality of orthogonal matrices. To address this, we start by examining OFT from an information transmission perspective, and then identify a few key desiderata that enable better parameter-efficiency. Inspired by how the Cooley-Tukey fast Fourier transform algorithm enables efficient information transmission, we propose an efficient orthogonal parameterization using butterfly structures. We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method, called Orthogonal Butterfly (BOFT). By subsuming OFT as a special case, BOFT introduces a generalized orthogonal finetuning framework. Finally, we conduct an extensive empirical study of adapting large vision transformers, large language models, and text-to-image diffusion models to various downstream tasks in computer vision and natural language. The results validate the effectiveness of BOFT as a generic finetuning method.", "type": "Poster", @@ -20958,9 +20958,9 @@ }, { "id": 18973, - "title": "Domain-Inspired Sharpness Aware Minimization Under Domain Shifts", + "title": "Domain-Inspired Sharpness-Aware Minimization Under Domain Shifts", "authors": [ - "ruipeng zhang", + "Ruipeng Zhang", "Ziqing Fan", "Jiangchao Yao", "Ya Zhang", @@ -21057,9 +21057,9 @@ "Matteo Vinao-Carl", "Nir Grossman", "Michael David", - "Emma-Jane Mallas", - "David Sharp", - "Paresh Malhotra", + "Emma Mallas", + "David J. Sharp", + "Paresh A. Malhotra", "Pierre Vandergheynst", "Adam Gosztolai" ], @@ -21118,7 +21118,7 @@ "authors": [ "Victor Geadah", "International Brain Laboratory", - "Jonathan Pillow" + "Jonathan W. Pillow" ], "abstract": "Unsupervised methods for dimensionality reduction of neural activity and behavior have provided unprecedented insights into the underpinnings of neural information processing. 
One popular approach involves the recurrent switching linear dynamical system (rSLDS) model, which describes the latent dynamics of neural spike train data using discrete switches between a finite number of low-dimensional linear dynamical systems. However, a few properties of rSLDS model limit its deployability on trial-varying data, such as a fixed number of states over trials, and no latent structure or organization of states. Here we overcome these limitations by endowing the rSLDS model with a semi-Markov discrete state process, with latent geometry, that captures key properties of stochastic processes over partitions with flexible state cardinality. We leverage partial differential equations (PDE) theory to derive an efficient, semi-parametric formulation for dynamical sufficient statistics to the discrete states. This process, combined with switching dynamics, defines our infinite recurrent switching linear dynamical system (irSLDS) model class. We first validate and demonstrate the capabilities of our model on synthetic data. Next, we turn to the analysis of mice electrophysiological data during decision-making, and uncover strong non-stationary processes underlying both within-trial and trial-averaged neural activity.", "type": "Poster", @@ -21173,8 +21173,8 @@ "id": 18424, "title": "Conditional Variational Diffusion Models", "authors": [ - "Gabriel della Maggiora", - "Luis A. Croquevielle", + "Gabriel Della Maggiora", + "Luis Alberto Croquevielle", "Nikita Deshpande", "Harry Horsley", "Thomas Heinis", @@ -21286,7 +21286,7 @@ "authors": [ "Rohan Sharma", "Kaiyi Ji", - "Zhiqiang Xu", + "zhiqiang xu", "Changyou Chen" ], "abstract": "Self-supervised learning through contrastive representations is an emergent and promising avenue, aiming at alleviating the availability of labeled data. Recent research in the field also demonstrates its viability for several downstream tasks, henceforth leading to works that implement the contrastive principle through innovative loss functions and methods. However, despite achieving impressive progress, most methods depend on prohibitively large batch sizes and compute requirements for good performance. In this work, we propose the $\\textbf{AUC}$-$\\textbf{C}$ontrastive $\\textbf{L}$earning, a new approach to contrastive learning that demonstrates robust and competitive performance in compute-limited regimes. We propose to incorporate the contrastive objective within the AUC-maximization framework, by noting that the AUC metric is maximized upon enhancing the probability of the network's binary prediction difference between positive and negative samples which inspires adequate embedding space arrangements in representation learning. Unlike standard contrastive methods, when performing stochastic optimization, our method maintains unbiased stochastic gradients and thus is more robust to batchsizes as opposed to standard stochastic optimization problems.Remarkably, our method with a batch size of 256, outperforms several state-of-the-art methods that may need much larger batch sizes (e.g., 4096), on ImageNet and other standard datasets. 
Experiments on transfer learning, few-shot learning, and other downstream tasks also demonstrate the viability of our method.", @@ -21300,7 +21300,7 @@ }, { "id": 18414, - "title": "Emerging Pixel-level Semantic Knowledge in Diffusion Models", + "title": "EmerDiff: Emerging Pixel-level Semantic Knowledge in Diffusion Models", "authors": [ "Koichi Namekata", "Amirmojtaba Sabour", @@ -21346,7 +21346,7 @@ "title": "An Agnostic View on the Cost of Overfitting in (Kernel) Ridge Regression", "authors": [ "Lijia Zhou", - "James Simon", + "James B Simon", "Gal Vardi", "Nathan Srebro" ], @@ -21426,7 +21426,7 @@ "authors": [ "Hanmin Li", "Avetik Karagulyan", - "Peter Richtarik" + "Peter Richt\u00e1rik" ], "abstract": "This paper introduces a new method for minimizing matrix-smooth non-convex objectives through the use of novel Compressed Gradient Descent (CGD) algorithms enhanced with a matrix-valued stepsize. The proposed algorithms are theoretically analyzed first in the single-node and subsequently in the distributed settings. Our theoretical results reveal that the matrix stepsize in CGD can capture the objective\u2019s structure and lead to faster convergence compared to a scalar stepsize. As a byproduct of our general results, we emphasize the importance of selecting the compression mechanism and the matrix stepsize in a layer-wise manner, taking advantage of model structure. Moreover, we provide theoretical guarantees for free compression, by designing specific layer-wise compressors for the non-convex matrix smooth objectives. Our findings are supported with empirical evidence.", "type": "Poster", @@ -21442,7 +21442,7 @@ "title": "Skill or Luck? Return Decomposition via Advantage Functions", "authors": [ "Hsiao-Ru Pan", - "Bernhard Schoelkopf" + "Bernhard Sch\u00f6lkopf" ], "abstract": "Learning from off-policy data is essential for sample-efficient reinforcement learning. In the present work, we build on the insight that the advantage function can be understood as the causal effect of an action on the return, and show that this allows us to decompose the return of a trajectory into parts caused by the agent\u2019s actions (skill) and parts outside of the agent\u2019s control (luck). Furthermore, this decomposition enables us to naturally extend Direct Advantage Estimation (DAE) to off-policy settings (Off-policy DAE). The resulting method can learnfrom off-policy trajectories without relying on importance sampling techniques or truncating off-policy actions. We draw connections between Off-policy DAE and previous methods to demonstrate how it can speed up learning and when the proposed off-policy corrections are important. Finally, we use the MinAtar environments to illustrate how ignoring off-policy corrections can lead to suboptimal policy optimization performance.", "type": "Poster", @@ -21455,23 +21455,23 @@ }, { "id": 19293, - "title": "Maximally discriminative stimuli for functional cell type identification", + "title": "Most discriminative stimuli for functional cell type clustering", "authors": [ - "Max F. Burg", + "Max F Burg", "Thomas Zenkel", "Michaela Vystr\u010dilov\u00e1", "Jonathan Oesterle", "Larissa H\u00f6fling", - "Konstantin F. Willeke", + "Konstantin Friedrich Willeke", "Jan Lause", "Sarah M\u00fcller", - "Paul Fahey", + "Paul G. Fahey", "Zhiwei Ding", "Kelli Restivo", "Shashwat Sridhar", "Tim Gollisch", "Philipp Berens", - "Andreas Tolias", + "Andreas S. 
Tolias", "Thomas Euler", "Matthias Bethge", "Alexander S Ecker" @@ -21512,9 +21512,9 @@ "id": 18404, "title": "Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning", "authors": [ - "Linhao Luo", + "LINHAO LUO", "Yuan-Fang Li", - "Reza Haffari", + "Reza Haf", "Shirui Pan" ], "abstract": "Large language models (LLMs) have demonstrated impressive reasoning abilities in complex tasks. However, they lack up-to-date knowledge and experience hallucinations during reasoning, which can lead to incorrect reasoning processes and diminish their performance and trustworthiness. Knowledge graphs (KGs), which capture vast amounts of facts in a structured format, offer a reliable source of knowledge for reasoning. Nevertheless, existing KG-based LLM reasoning methods only treat KGs as factual knowledge bases and overlook the importance of their structural information for reasoning. In this paper, we propose a novel method called reasoning on graphs (RoG) that synergizes LLMs with KGs to enable faithful and interpretable reasoning. Specifically, we present a planning-retrieval-reasoning framework, where RoG first generates relation paths grounded by KGs as faithful plans. These plans are then used to retrieve valid reasoning paths from the KGs for LLMs to conduct faithful reasoning. Furthermore, RoG not only distills knowledge from KGs to improve the reasoning ability of LLMs through training but also allows seamless integration with any arbitrary LLMs during inference. Extensive experiments on two benchmark KGQA datasets demonstrate that RoG achieves state-of-the-art performance on KG reasoning tasks and generates faithful and interpretable reasoning results.", @@ -21544,7 +21544,7 @@ }, { "id": 18402, - "title": "A Stochastic Centering Framework for Improving Calibration in Graph Neural Networks", + "title": "Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks", "authors": [ "Puja Trivedi", "Mark Heimann", @@ -21566,7 +21566,7 @@ "title": "Learning semilinear neural operators: A unified recursive framework for prediction and data assimilation.", "authors": [ "Ashutosh Singh", - "Ricardo Borsoi", + "Ricardo Augusto Borsoi", "Deniz Erdogmus", "Tales Imbiriba" ], @@ -21603,7 +21603,7 @@ "Yongchan Kwon", "Eric Wu", "Kevin Wu", - "James Y Zou" + "James Zou" ], "abstract": "Quantifying the impact of training data points is crucial for understanding the outputs of machine learning models and for improving the transparency of the AI pipeline. The influence function is a principled and popular data attribution method, but its computational cost often makes it challenging to use. This issue becomes more pronounced in the setting of large language models and text-to-image models. In this work, we propose DataInf, an efficient influence approximation method that is practical for large-scale generative AI models. Leveraging an easy-to-compute closed-form expression, DataInf outperforms existing influence computation algorithms in terms of computational and memory efficiency. Our theoretical analysis shows that DataInf is particularly well-suited for parameter-efficient fine-tuning techniques such as LoRA. Through systematic empirical evaluations, we show that DataInf accurately approximates influence scores and is orders of magnitude faster than existing methods. 
In applications to RoBERTa-large, Llama-2-13B-chat, and stable-diffusion-v1.5 models, DataInf effectively identifies the most influential fine-tuning examples better than other approximate influence scores. Moreover, it can help to identify which data points are mislabeled.", "type": "Poster", @@ -21616,10 +21616,10 @@ }, { "id": 18399, - "title": "Efficient Distributed Training with Full Communication-Computation Overlap", + "title": "CO2: Efficient Distributed Training with Full Communication-Computation Overlap", "authors": [ "Weigao Sun", - "Qin Zhen", + "Zhen Qin", "Weixuan Sun", "Shidi Li", "Dong Li", @@ -21658,7 +21658,7 @@ "id": 18394, "title": "DMBP: Diffusion model-based predictor for robust offline reinforcement learning against state observation perturbations", "authors": [ - "Zhihe Yang", + "Zhihe YANG", "Yunjian Xu" ], "abstract": "Offline reinforcement learning (RL), which aims to fully explore offline datasets for training without interaction with environments, has attracted growing recent attention. A major challenge for the real-world application of offline RL stems from the robustness against state observation perturbations, e.g., as a result of sensor errors or adversarial attacks. Unlike online robust RL, agents cannot be adversarially trained in the offline setting. In this work, we propose Diffusion Model-Based Predictor (DMBP) in a new framework that recovers the actual states with conditional diffusion models for state-based RL tasks. To mitigate the error accumulation issue in model-based estimation resulting from the classical training of conventional diffusion models, we propose a non-Markovian training objective to minimize the sum entropy of denoised states in RL trajectory. Experiments on standard benchmark problems demonstrate that DMBP can significantly enhance the robustness of existing offline RL algorithms against different scales of ran- dom noises and adversarial attacks on state observations. Further, the proposed framework can effectively deal with incomplete state observations with random combinations of multiple unobserved dimensions in the test. Our implementation is available at https://github.com/zhyang2226/DMBP.", @@ -21716,7 +21716,7 @@ "title": "BENO: Boundary-embedded Neural Operators for Elliptic PDEs", "authors": [ "Haixin Wang", - "Jiaxin Li", + "Jiaxin LI", "Anubhav Dwivedi", "Kentaro Hara", "Tailin Wu" @@ -21755,8 +21755,8 @@ "title": "Towards Foundational Models for Molecular Learning on Large-Scale Multi-Task Datasets", "authors": [ "Dominique Beaini", - "Shenyang(Andy) Huang", - "Joao Cunha", + "Shenyang Huang", + "Joao Alex Cunha", "Zhiyi Li", "Gabriela Moisescu-Pareja", "Oleksandr Dymov", @@ -21779,12 +21779,12 @@ "Reihaneh Rabbany", "Jian Tang", "Christopher Morris", - "Mirco Ravanellu", + "Mirco Ravanelli", "Guy Wolf", "Prudencio Tossou", "Hadrien Mary", "Therence Bois", - "Andrew Fitzgibbon", + "Andrew William Fitzgibbon", "Blazej Banaszewski", "Chad Martin", "Dominic Masters" @@ -21848,7 +21848,7 @@ "Samuel Lang", "Alexandra Haslund-Gourley", "Eviatar Yemini", - "Steven Zucker" + "Steven W. Zucker" ], "abstract": "The static synaptic connectivity of neuronal circuits stands in direct contrast to the dynamics of their function. As in changing community interactions, different neurons can participate actively in various combinations to effect behaviors at different times. 
We introduce an unsupervised approach to learn the dynamic affinities between neurons in live, behaving animals, and to reveal which communities form among neurons at different times. The inference occurs in two major steps. First, pairwise non-linear affinities between neuronal traces from brain-wide calcium activity are organized by non-negative tensor factorization (NTF). Each factor specifies which groups of neurons are most likely interacting for an inferred interval in time, and for which animals. Finally, a generative model that allows for weighted community detection is applied to the functional motifs produced by NTF to reveal a dynamic functional connectome. Since time codes the different experimental variables (e.g., application of chemical stimuli), this provides an atlas of neural motifs active during separate stages of an experiment (e.g., stimulus application or spontaneous behaviors). Results from our analysis are experimentally validated, confirming that our method is able to robustly predict causal interactions between neurons to generate behavior.", "type": "Poster", @@ -21887,7 +21887,7 @@ "Victor Akinwande", "Yiding Jiang", "Dylan Sam", - "Zico Kolter" + "J Zico Kolter" ], "abstract": "Zero-shot learning in prompted vision-language models, the practice of crafting prompts to build classifiers without an explicit training process, has achieved impressive performance in many settings. This success presents a seemingly surprising observation: these methods suffer relatively little from overfitting, i.e., when a prompt is manually engineered to achieve low error on a given training set (thus rendering the method no longer actually zero-shot), the approach still performs well on held-out test data. In this paper, we show that we can explain such performance well via recourse to classical PAC-Bayes bounds. Specifically, we show that the discrete nature of prompts, combined with a PAC-Bayes prior given by a language model, results in generalization bounds that are remarkably tight by the standards of the literature: for instance, the generalization bound of an ImageNet classifier is often within a few percentage points of the true test error. We demonstrate empirically that this holds for existing handcrafted prompts and prompts generated through simple greedy search. Furthermore, the resulting bound is well-suited for model selection: the models with the best bound typically also have the best test performance. This work thus provides a possible justification for the widespread practice of \"prompt engineering,\" even if it seems that such methods could potentially overfit the training data.", "type": "Poster", @@ -21904,7 +21904,7 @@ "authors": [ "Daniil Kirilenko", "Vitaliy Vorobyov", - "Aleksey Kovalev", + "Alexey Kovalev", "Aleksandr Panov" ], "abstract": "Object-centric architectures usually apply a differentiable module to the entire feature map to decompose it into sets of entity representations called slots. Some of these methods structurally resemble clustering algorithms, where the cluster's center in latent space serves as a slot representation. Slot Attention is an example of such a method, acting as a learnable analog of the soft k-means algorithm. Our work employs a learnable clustering method based on the Gaussian Mixture Model. Unlike other approaches, we represent slots not only as centers of clusters but also incorporate information about the distance between clusters and assigned vectors, leading to more expressive slot representations. 
Our experiments demonstrate that using this approach instead of Slot Attention improves performance in object-centric scenarios, achieving state-of-the-art results in the set property prediction task.", @@ -21926,7 +21926,7 @@ "Ho-fung Leung", "Farzan Farnia", "Wen Sun", - "Jason Lee" + "Jason D. Lee" ], "abstract": "We study risk-sensitive Reinforcement Learning (RL), where we aim to maximizethe Conditional Value at Risk (CVaR) with a fixed risk tolerance $\\tau$. Prior theoretical work studying risk-sensitive RL focuses on the tabular Markov Decision Processes (MDPs) setting. To extend CVaR RL to settings where state space is large, function approximation must be deployed. We study CVaR RL in low-rank MDPs with nonlinear function approximation. Low-rank MDPs assume the underlying transition kernel admits a low-rank decomposition, but unlike prior linear models, low-rank MDPs do not assume the feature or state-action representation is known. We propose a novel Upper Confidence Bound (UCB) bonus-driven algorithm to carefully balance the interplay between exploration, exploitation, and representation learning in CVaR RL. We prove that our algorithm achieves a sample complexity of $\\tilde{O}\\left(\\frac{H^7 A^2 d^4}{\\tau^2 \\epsilon^2}\\right)$ to yield an $\\epsilon$-optimal CVaR, where $H$ is the length of each episode, $A$ is the capacity of action space, and $d$ is the dimension of representations.Computational-wise, we design a novel discretized Least-Squares Value Iteration (LSVI) algorithm for the CVaR objective as the planning oracle and show that we can find the near-optimal policy in a polynomial running time with a Maximum Likelihood Estimation oracle. To our knowledge, this is the first provably efficient CVaR RL algorithm in low-rank MDPs.", "type": "Poster", @@ -21941,7 +21941,7 @@ "id": 18373, "title": "Interpretable Sparse System Identification: Beyond Recent Deep Learning Techniques on Time-Series Prediction", "authors": [ - "Liu Xiaoyi", + "Xiaoyi Liu", "Duxin Chen", "Wenjia Wei", "Xia Zhu", @@ -21963,8 +21963,8 @@ "Rabia Gondur", "Usama Bin Sikandar", "Evan Schaffer", - "Mikio Aoi", - "Stephen Keeley" + "Mikio Christian Aoi", + "Stephen L Keeley" ], "abstract": "Characterizing the relationship between neural population activity and behavioral data is a central goal of neuroscience. While latent variable models (LVMs) are successful in describing high-dimensional data, they are typically only designed for a single type of data, making it difficult to identify structure shared across different experimental data modalities. Here, we address this shortcoming by proposing an unsupervised LVM which extracts shared and independent latents for distinct, simultaneously recorded experimental modalities. We do this by combining Gaussian Process Factor Analysis (GPFA), an interpretable LVM for neural spiking data with temporally smooth latent space, with Gaussian Process Variational Autoencoders (GP-VAEs), which similarly use a GP prior to characterize correlations in a latent space, but admit rich expressivity due to a deep neural network mapping to observations. We achieve interpretability in our model by partitioning latent variability into components that are either shared between or independent to each modality. We parameterize the latents of our model in the Fourier domain, and show improved latent identification using this approach over standard GP-VAE methods. 
We validate our model on simulated multi-modal data consisting of Poisson spike counts and MNIST images that scale and rotate smoothly over time. We show that the multi-modal GP-VAE (MM-GPVAE) is able to not only identify the shared and independent latent structure across modalities accurately, but provides good reconstructions of both images and neural rates on held-out trials. Finally, we demonstrate our framework on two real world multi-modal experimental settings: Drosophila whole-brain calcium imaging alongside tracked limb positions, and Manduca sexta spike train measurements from ten wing muscles as the animal tracks a visual stimulus.", "type": "Poster", @@ -21982,9 +21982,9 @@ "Samyak Jain", "Robert Kirk", "Ekdeep Singh Lubana", - "Robert Dick", + "Robert P. Dick", "Hidenori Tanaka", - "Tim Rocktaeschel", + "Tim Rockt\u00e4schel", "Edward Grefenstette", "David Krueger" ], @@ -22044,7 +22044,7 @@ "Kaixuan Ji", "Qingyue Zhao", "Jiafan He", - "Weitong ZHANG", + "Weitong Zhang", "Quanquan Gu" ], "abstract": "Recent studies have shown that the regret of reinforcement learning (RL) can be polylogarithmic in the planning horizon $H$. However, it remains an open question whether such a result holds for adversarial RL. In this paper, we answer this question affirmatively by proposing the first horizon-free policy search algorithm. To tackle the challenges caused by exploration and adversarially chosen reward over episodes, our algorithm employs (1) a variance-uncertainty-aware weighted least square estimator for the transition kernel; and (2) an occupancy measure-based technique for the online search of a stochastic policy. We show that our algorithm achieves an $\\tilde{O}\\big((d+\\log |\\mathcal{S}|)\\sqrt{K} + d^2\\big)$ regret with full-information feedback, where $d$ is the dimension of a known feature mapping linearly parametrizing the unknown transition kernel of the MDP, $K$ is the number of episodes, $|\\mathcal{S}|$ is the cardinality of the state space. We also provide hardness results to justify the near optimality of our algorithm and the inevitability of $\\log|\\mathcal{S}|$ in the regret bound.", @@ -22060,7 +22060,7 @@ "id": 18367, "title": "Symbol as Points: Panoptic Symbol Spotting via Point-based Representation", "authors": [ - "Wenlong Liu", + "WENLONG LIU", "Tianyu Yang", "Yuhan Wang", "Qizhi Yu", @@ -22101,7 +22101,7 @@ "title": "Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making", "authors": [ "Jeonghye Kim", - "Su Young Lee", + "Suyoung Lee", "Woojun Kim", "Youngchul Sung" ], @@ -22119,7 +22119,7 @@ "title": "Circumventing Concept Erasure Methods For Text-To-Image Generative Models", "authors": [ "Minh Pham", - "Kelly Marshall", + "Kelly O. Marshall", "Niv Cohen", "Govind Mittal", "Chinmay Hegde" @@ -22156,9 +22156,9 @@ "id": 19272, "title": "Manipulating dropout reveals an optimal balance of efficiency and robustness in biological and machine visual systems", "authors": [ - "Jacob Prince", + "Jacob S. Prince", "Gabriel Fajardo", - "George Alvarez", + "George A. Alvarez", "Talia Konkle" ], "abstract": "According to the efficient coding hypothesis, neural populations encode information optimally when representations are high-dimensional and uncorrelated. However, such codes may carry a cost in terms of generalization and robustness. Past empirical studies of early visual cortex (V1) in rodents have suggested that this tradeoff indeed constrains sensory representations. 
However, it remains unclear whether these insights generalize across the hierarchy of the human visual system, and particularly to object representations in high-level occipitotemporal cortex (OTC). To gain new empirical clarity, here we develop a family of object recognition models with parametrically varying dropout proportion $p$, which induces systematically varying dimensionality of internal responses (while controlling all other inductive biases). We find that increasing dropout produces an increasingly smooth, low-dimensional representational space. Optimal robustness to lesioning is observed at around 70% dropout, after which both accuracy and robustness decline. Representational comparison to large-scale 7T fMRI data from occipitotemporal cortex in the Natural Scenes Dataset reveals that this optimal degree of dropout is also associated with maximal emergent neural predictivity. Finally, using new techniques for achieving denoised estimates of the eigenspectrum of human fMRI responses, we compare the rate of eigenspectrum decay between model and brain feature spaces. We observe that the match between model and brain representations is associated with a common balance between efficiency and robustness in the representational space. These results suggest that varying dropout may reveal an optimal point of balance between the efficiency of high-dimensional codes and the robustness of low dimensional codes in hierarchical vision systems.", @@ -22275,7 +22275,7 @@ "id": 18353, "title": "Adaptive Regularization of Representation Rank as an Implicit Constraint of Bellman Equation", "authors": [ - "Qiang HE", + "Qiang He", "Tianyi Zhou", "Meng Fang", "Setareh Maghsudi" @@ -22330,7 +22330,7 @@ "title": "Expressivity of ReLU-Networks under Convex Relaxations", "authors": [ "Maximilian Baader", - "Mark N M\u00fcller", + "Mark Niklas Mueller", "Yuhao Mao", "Martin Vechev" ], @@ -22367,10 +22367,10 @@ }, { "id": 18347, - "title": "Meta-Learning Priors Using Unrolled Proximal Neural Networks", + "title": "Meta-Learning Priors Using Unrolled Proximal Networks", "authors": [ "Yilang Zhang", - "Georgios B Giannakis" + "Georgios B. Giannakis" ], "abstract": "Relying on prior knowledge accumulated from related tasks, meta-learning offers a powerful approach to learning a novel task from a limited number of training data. Recent approaches use a family of prior probability density functions or recurrent neural network models, whose parameters can be optimized by utilizing labeled data from the observed tasks. While these approaches have appealing empirical performance, expressiveness of their prior is relatively low, which limits generalization and interpretation of meta-learning. Aiming at expressive yet meaningful priors, this contribution puts forth a novel prior representation model that leverages the notion of algorithm unrolling. The key idea is to unroll the proximal gradient descent steps, where learnable piecewise linear functions are developed to approximate the desired proximal operators within *tight* theoretical error bounds established for both smooth and non-smooth proximal functions. The resultant multi-block neural network not only broadens the scope of learnable priors, but also enhances interpretability from an optimization viewpoint. 
Numerical tests conducted on few-shot learning datasets demonstrate markedly improved performance with flexible, visualizable, and understandable priors.", "type": "Poster", @@ -22450,7 +22450,7 @@ "id": 18340, "title": "Understanding In-Context Learning from Repetitions", "authors": [ - "Jianhao (Elliott) Yan", + "Jianhao Yan", "Jin Xu", "Chiyu Song", "Chenming Wu", @@ -22473,7 +22473,7 @@ "Marco Federici", "Patrick Forr\u00e9", "Ryota Tomioka", - "Bastiaan Veeling" + "Bastiaan S. Veeling" ], "abstract": "Markov processes are widely used mathematical models for describing dynamic systems in various fields. However, accurately simulating large-scale systems at long time scales is computationally expensive due to the short time steps required for accurate integration. In this paper, we introduce an inference process that maps complex systems into a simplified representational space and models large jumps in time. To achieve this, we propose Time-lagged Information Bottleneck (T-IB), a principled objective rooted in information theory, which aims to capture relevant temporal features while discarding high-frequency information to simplify the simulation task and minimize the inference error. Our experiments demonstrate that T-IB learns information-optimal representations for accurately modeling the statistical properties and dynamics of the original process at a selected time lag, outperforming existing time-lagged dimensionality reduction methods.", "type": "Poster", @@ -22543,7 +22543,7 @@ }, { "id": 18336, - "title": "UC-NERF: Neural Radiance Field for under-calibrated multi-view cameras", + "title": "UC-NERF: Neural Radiance Field for Under-Calibrated Multi-View Cameras in Autonomous Driving", "authors": [ "Kai Cheng", "Xiaoxiao Long", @@ -22673,7 +22673,7 @@ "Jiankun Wang", "Qianru Sun", "Lei Ji", - "Eric Chang", + "Eric I-Chao Chang", "Hanwang Zhang" ], "abstract": "Representation learning is all about discovering the hidden modular attributes that generate the data faithfully. We explore the potential of Denoising Diffusion Probabilistic Model (DM) in unsupervised learning of the modular attributes. We build a theoretical framework that connects the diffusion time-steps and the hidden attributes, which serves as an effective inductive bias for unsupervised learning. Specifically, the forward diffusion process incrementally adds Gaussian noise to samples at each time-step, which essentially collapses different samples into similar ones by losing attributes, e.g., fine-grained attributes such as texture are lost with less noise added (i.e., early time-steps), while coarse-grained ones such as shape are lost by adding more noise (i.e., late time-steps). To disentangle the modular attributes, at each time-step t, we learn a t-specific feature to compensate for the newly lost attribute, and the set of all {1,...,t}-specific features, corresponding to the cumulative set of lost attributes, are trained to make up for the reconstruction error of a pre-trained DM at time-step t. On CelebA, FFHQ, and Bedroom datasets, the learned feature significantly improves attribute classification and enables faithful counterfactual generation, e.g., interpolating only one specified attribute between two images, validating the disentanglement quality. Codes are in Appendix.", @@ -22711,7 +22711,7 @@ "authors": [ "Pierre Marion", "Yu-Han Wu", - "Michael Sander", + "Michael Eli Sander", "G\u00e9rard Biau" ], "abstract": "Residual neural networks are state-of-the-art deep learning models. 
Their continuous-depth analog, neural ordinary differential equations (ODEs), are also widely used. Despite their success, the link between the discrete and continuous models still lacks a solid mathematical foundation. In this article, we take a step in this direction by establishing an implicit regularization of deep residual networks towards neural ODEs, for nonlinear networks trained with gradient flow. We prove that if the network is initialized as a discretization of a neural ODE, then such a discretization holds throughout training. Our results are valid for a finite training time, and also as the training time tends to infinity provided that the network satisfies a Polyak-\u0141ojasiewicz condition. Importantly, this condition holds for a family of residual networks where the residuals are two-layer perceptrons with an overparameterization in width that is only linear, and implies the convergence of gradient flow to a global minimum. Numerical experiments illustrate our results.", @@ -22747,9 +22747,9 @@ "Arpita Chowdhury", "Xinqi Xiong", "Feng-Ju Chang", - "David Carlyn", + "David Edward Carlyn", "Samuel Stevens", - "Kaiya Provost", + "Kaiya L Provost", "Anuj Karpatne", "Bryan Carstens", "Daniel Rubenstein", @@ -22774,7 +22774,7 @@ "title": "Adaptive Stochastic Gradient Algorithm for Black-box Multi-Objective Learning", "authors": [ "Feiyang YE", - "YUEMING LYU", + "Yueming Lyu", "Xuehao Wang", "Yu Zhang", "Ivor Tsang" @@ -22812,10 +22812,10 @@ }, { "id": 19249, - "title": "Transformers vs. Message Passing GNNs: Distinguished in Uniform", + "title": "Distinguished In Uniform: Self-Attention Vs. Virtual Nodes", "authors": [ - "Jan T\u00f6nshoff", "Eran Rosenbluth", + "Jan T\u00f6nshoff", "Martin Ritzert", "Berke Kisin", "Martin Grohe" @@ -22868,7 +22868,7 @@ "id": 18321, "title": "Mediator Interpretation and Faster Learning Algorithms for Linear Correlated Equilibria in General Sequential Games", "authors": [ - "Brian Zhang", + "Brian Hu Zhang", "Gabriele Farina", "Tuomas Sandholm" ], @@ -22989,7 +22989,7 @@ "Yingqian Cui", "Shenglai Zeng", "Hui Liu", - "Charu Aggarwal", + "Charu C. Aggarwal", "Jiliang Tang" ], "abstract": "Recent research has highlighted the vulnerability of Deep Neural Networks (DNNs) against data poisoning attacks. These attacks aim to inject poisoning samples into the models' training dataset such that the trained models have inference failures. While previous studies have executed different types of attacks, one major challenge that greatly limits their effectiveness is the uncertainty of the re-training process after the injection of poisoning samples. It includes the uncertainty of training initialization, algorithm and model architecture. To address this challenge, we propose a new strategy called **Sharpness-Aware Data Poisoning Attack (SAPA)**. In particular, it leverages the concept of DNNs' loss landscape sharpness to optimize the poisoning effect on the (approximately) worst re-trained model. Extensive experiments demonstrate that SAPA offers a general and principled strategy that significantly enhances various types of poisoning attacks against various types of re-training uncertainty.", @@ -23032,7 +23032,7 @@ "Finn Rietz", "Erik Schaffernicht", "Stefan Heinrich", - "Johannes Stork" + "Johannes A. Stork" ], "abstract": "Reinforcement learning (RL) for complex tasks remains a challenge, primarily due to the difficulties of engineering scalar reward functions and the inherent inefficiency of training models from scratch. 
Instead, it would be better to specify complex tasks in terms of elementary subtasks and to reuse subtask solutions whenever possible. In this work, we address continuous space lexicographic multi-objective RL problems, consisting of prioritized subtasks, which are notoriously difficult to solve. We show that these can be scalarized with a subtask transformation and then solved incrementally using value decomposition. Exploiting this insight, we propose prioritized soft Q-decomposition (PSQD), a novel algorithm for learning and adapting subtask solutions under lexicographic priorities in continuous state-action spaces. PSQD offers the ability to reuse previously learned subtask solutions in a zero-shot composition, followed by an adaptation step. Its ability to use retained subtask training data for offline learning eliminates the need for new environment interaction during adaptation. We demonstrate the efficacy of our approach by presenting successful learning, reuse, and adaptation results for both low- and high-dimensional simulated robot control tasks, as well as offline learning results. In contrast to baseline approaches, PSQD does not trade off between conflicting subtasks or priority constraints and satisfies subtask priorities during learning. PSQD provides an intuitive framework for tackling complex RL problems, offering insights into the inner workings of the subtask composition.", "type": "Poster", @@ -23050,7 +23050,7 @@ "Kevin Black", "Mitsuhiko Nakamoto", "Pranav Atreya", - "Homer Walke", + "Homer Rich Walke", "Chelsea Finn", "Aviral Kumar", "Sergey Levine" @@ -23070,7 +23070,7 @@ "authors": [ "Raman Dutt", "Ondrej Bohdal", - "Sotirios Tsaftaris", + "Sotirios A. Tsaftaris", "Timothy Hospedales" ], "abstract": "Training models with robust group fairness properties is crucial in ethically sensitive application areas such as medical diagnosis. Despite the growing body of work aiming to minimise demographic bias in AI, this problem remains challenging. A key reason for this challenge is the fairness generalisation gap: High-capacity deep learning models can fit all training data nearly perfectly, and thus also exhibit perfect fairness during training. In this case, bias emerges only during testing when generalisation performance differs across sub-groups. This motivates us to take a bi-level optimisation perspective on fair learning: Optimising the learning strategy based on validation fairness. Specifically, we consider the highly effective workflow of adapting pre-trained models to downstream medical imaging tasks using parameter-efficient fine-tuning (PEFT) techniques. There is a trade-off between updating more parameters, enabling a better fit to the task of interest vs. fewer parameters, potentially reducing the generalisation gap. To manage this tradeoff, we propose FairTune, a framework to optimise the choice of PEFT parameters with respect to fairness. We demonstrate empirically that FairTune leads to improved fairness on a range of medical imaging datasets.", @@ -23092,7 +23092,7 @@ "Millicent Li", "Arnab Sen Sharma", "Aaron Mueller", - "Byron Wallace", + "Byron C Wallace", "David Bau" ], "abstract": "We report the presence of a simple neural mechanism that represents an input-output function as a vector within autoregressive transformer language models (LMs). 
Using causal mediation analysis on a diverse range of in-context-learning (ICL) tasks, we find that a small number attention heads transport a compact representation of the demonstrated task, which we call a function vector (FV). FVs are robust to changes in context, i.e., they trigger execution of the task on inputs such as zero-shot and natural text settings that do not resemble the ICL contexts from which they are collected. We test FVs across a range of tasks, models, and layers and find strong causal effects across settings in middle layers. We investigate the internal structure of FVs and find while that they often contain information that encodes the output space of the function, this information alone is not sufficient to reconstruct an FV. Finally, we test semantic vector composition in FVs, and find that to some extent they can be summed to create vectors that trigger new complex tasks. Our findings show that compact, causal internal vector representations of function abstractions can be explicitly extracted from LLMs.", @@ -23112,7 +23112,7 @@ "Yukang Chen", "Luozhou WANG", "Shu Liu", - "YINGCONG CHEN" + "Ying-Cong Chen" ], "abstract": "Denoising Diffusion Probabilistic Models (DDPMs) have garnered popularity for data generation across various domains. However, a significant bottleneck is the necessity for whole-network computation during every step of the generative process, leading to high computational overheads. This paper presents a novel framework, Denoising Diffusion Step-aware Models (DDSM), to address this challenge. Unlike conventional approaches, DDSM employs a spectrum of neural networks whose sizes are adapted according to the importance of each generative step, as determined through evolutionary search. This step-wise network variation effectively circumvents redundant computational efforts, particularly in less critical steps, thereby enhancing the efficiency of the diffusion model. Furthermore, the step-aware design can be seamlessly integrated with other efficiency-geared diffusion models such as DDIMs and latent diffusion, thus broadening the scope of computational savings. Empirical evaluations demonstrate that DDSM achieves computational savings of 49% for CIFAR-10, 61% for CelebA-HQ, 59% for LSUN-bedroom, 71% for AFHQ, and 76% for ImageNet, all without compromising the generation quality. Our code and models will be publicly available.", "type": "Poster", @@ -23129,7 +23129,7 @@ "authors": [ "Jingyun Xiao", "Ran Liu", - "Eva Dyer" + "Eva L Dyer" ], "abstract": "Analyzing multivariate time series is important in many domains. However, it has been difficult to learn robust and generalizable representations within multivariate datasets due to complex inter-channel relationships and dynamic shifts. In this paper, we introduce a novel approach for learning spatiotemporal structure and using it to improve the application of transformers to timeseries datasets. Our framework learns a set of group tokens, and builds an instance-specific group embedding (GE) layer that assigns input tokens to a small number of group tokens to incorporate structure into learning. We then introduce a novel architecture, Group-Aware transFormer (GAFormer), which incorporates both spatial and temporal group embeddings to achieve state-of-the-art performance on a number of time-series classification and regression tasks. 
In evaluations on a number of diverse timeseries datasets, we show that GE on its own can provide a nice enhancement to a number of backbones, and that by coupling spatial and temporal group embeddings, the GAFormer can outperform the existing baselines. Finally, we show how our approach discerns latent structures in data even without information about the spatial ordering of channels, and yields a more interpretable decomposition of spatial and temporal structure underlying complex multivariate datasets.", "type": "Poster", @@ -23168,7 +23168,7 @@ "Haitao Lin", "Cheng Tan", "Zhangyang Gao", - "Stan Z Li" + "Stan Z. Li" ], "abstract": "Recent years have witnessed the great success of graph pre-training for graph representation learning. With hundreds of graph pre-training tasks proposed, integrating knowledge acquired from multiple pre-training tasks has become a popular research topic. We identify two important collaborative processes for this topic: (1) select: how to select an optimal task combination from a given task pool based on their compatibility, and (2) weigh: how to weigh the importance of the selected tasks based on their importance. While there has been a lot of current works focused on weighing, comparatively little effort has been devoted to selecting. In this paper, we propose a novel instance-level framework for integrating multiple graph pre-training tasks, Weigh And Select (WAS), where the two collaborative processes, weighing and selecting, are combined by decoupled siamese networks. Specifically, it first adaptively learns an optimal combination of tasks for each instance from a given task pool, based on which a customized instance-level task weighing strategy is learned. Extensive experiments on 16 graph datasets across node-level and graph-level show that by combining a few simple but classical tasks, WAS can achieve comparable performance to other leading counterparts.", "type": "Poster", @@ -23228,7 +23228,7 @@ }, { "id": 18302, - "title": "Free Lunches in Auxiliary Learning: Exploiting Auxiliary Labels with Negligibly Extra Inference Cost", + "title": "Aux-NAS: Exploiting Auxiliary Labels with Negligibly Extra Inference Cost", "authors": [ "Yuan Gao", "WEIZHONG ZHANG", @@ -23291,7 +23291,7 @@ "Renat Sergazinov", "Elizabeth Chun", "Valeriya Rogovchenko", - "Nathaniel Fernandes", + "Nathaniel J Fernandes", "Nicholas Kasman", "Irina Gaynanova" ], @@ -23330,7 +23330,7 @@ "Li Meng", "Morten Goodwin", "Anis Yazidi", - "Paal Engelstad" + "Paal E. Engelstad" ], "abstract": "The manifold hypothesis posits that high-dimensional data often lies on a lower-dimensional manifold and that utilizing this manifold as the target space yields more efficient representations. While numerous traditional manifold-based techniques exist for dimensionality reduction, their application in self-supervised learning has witnessed slow progress. The recent MSimCLR method combines manifold encoding with SimCLR but requires extremely low target encoding dimensions to outperform SimCLR, limiting its applicability. This paper introduces a novel learning paradigm using an unbalanced atlas (UA), capable of surpassing state-of-the-art self-supervised learning approaches. We investigated and engineered the DeepInfomax with an unbalanced atlas (DIM-UA) method by adapting the Spatiotemporal DeepInfomax (ST-DIM) framework to align with our proposed UA paradigm. 
The efficacy of DIM-UA is demonstrated through training and evaluation on the Atari Annotated RAM Interface (AtariARI) benchmark, a modified version of the Atari 2600 framework that produces annotated image samples for representation learning. The UA paradigm improves existing algorithms significantly as the number of target encoding dimensions grows. For instance, the mean F1 score averaged over categories of DIM-UA is~75% compared to ~70% of ST-DIM when using 16384 hidden units.", "type": "Poster", @@ -23343,13 +23343,13 @@ }, { "id": 17626, - "title": "Controlling Vision-Language Models for Universal Image Restoration", + "title": "Controlling Vision-Language Models for Multi-Task Image Restoration", "authors": [ "Ziwei Luo", "Fredrik K. Gustafsson", "Zheng Zhao", "Jens Sj\u00f6lund", - "Thomas Sch\u00f6n" + "Thomas B. Sch\u00f6n" ], "abstract": "Vision-language models such as CLIP have shown great impact on diverse downstream tasks for zero-shot or label-free predictions. However, when it comes to low-level vision such as image restoration their performance deteriorates dramatically due to corrupted inputs. In this paper, we present a degradation-aware vision-language model (DA-CLIP) to better transfer pretrained vision-language models to low-level vision tasks as a universal framework for image restoration. More specifically, DA-CLIP trains an additional controller that adapts the fixed CLIP image encoder to predict high-quality feature embeddings. By integrating the embedding into an image restoration network via cross-attention, we are able to pilot the model to learn a high-fidelity image reconstruction. The controller itself will also output a degradation feature that matches the real corruptions of the input, yielding a natural classifier for different degradation types. In addition, we construct a mixed degradation dataset with synthetic captions for DA-CLIP training. Our approach advances state-of-the-art performance on both degradation-specific and unified image restoration tasks, showing a promising direction of prompting image restoration with large-scale pretrained vision-language models.", "type": "Poster", @@ -23385,10 +23385,10 @@ "id": 18293, "title": "Few-Shot Detection of Machine-Generated Text using Style Representations", "authors": [ - "Rafael Rivera Soto", + "Rafael Alberto Rivera Soto", "Kailin Koch", "Aleem Khan", - "Barry Chen", + "Barry Y. Chen", "Marcus Bishop", "Nicholas Andrews" ], @@ -23406,7 +23406,7 @@ "title": "Uncertainty Quantification via Stable Distribution Propagation", "authors": [ "Felix Petersen", - "Aashwin Mishra", + "Aashwin Ananda Mishra", "Hilde Kuehne", "Christian Borgelt", "Oliver Deussen", @@ -23425,7 +23425,7 @@ "id": 18290, "title": "Beyond Accuracy: Evaluating Self-Consistency of Code Large Language Models with IdentityChain", "authors": [ - "Min", + "Marcus J. Min", "Yangruibo Ding", "Luca Buratti", "Saurabh Pujar", @@ -23500,8 +23500,8 @@ "Bunyamin Sisman", "Yi Xu", "Ouye Xie", - "Benjamin Yao", - "son tran", + "Benjamin Z. Yao", + "Son Dinh Tran", "Belinda Zeng" ], "abstract": "Generative modeling via diffusion-based models has been achieving state-of-the-art results on various generation tasks. Most existing diffusion models, however, are limited to a single-generation modeling. Can we generalize diffusion models with the ability of multi-task generative training for more generalizable modeling? 
In this paper, we propose a principled way to define a diffusion model for this purpose by constructing a unified multi-task diffusion model in a common {\\em diffusion space}. We define the forward diffusion process to be driven by an information aggregation from multiple types of task-data, {\\it e.g.}, images for a generation task and labels for a classification task. In the reverse process, we enforce information sharing by parameterizing a shared backbone denoising network with additional task-specific decoder heads. Such a structure can simultaneously learn to generate different types of multi-task data with a multi-task loss, which is derived from a multi-task variational lower bound that generalizes the standard diffusion model. We propose several multi-task generation settings to verify our framework, including image transition, masked-image training, joint image-label and joint image-representation generative modeling. Extensive experimental results on ImageNet indicate the effectiveness of our framework for various multi-task generative modeling, which we believe is an important research direction worthy of more future explorations.", @@ -23515,7 +23515,7 @@ }, { "id": 19171, - "title": "Fast Updating of Truncated SVD for Representation Learning in Sparse Matrix", + "title": "Fast Updating Truncated SVD for Representation Learning with Sparse Matrices", "authors": [ "Haoran Deng", "Yang Yang", @@ -23683,7 +23683,7 @@ }, { "id": 18286, - "title": "Robust NAS benchmark under adversarial training: assessment, theory, and beyond", + "title": "Robust NAS under adversarial training: benchmark, theory, and beyond", "authors": [ "Yongtao Wu", "Fanghui Liu", @@ -23706,7 +23706,7 @@ "authors": [ "Omer Nahum", "Gali Noti", - "David Parkes", + "David C. Parkes", "Nir Rosenfeld" ], "abstract": "Congestion is a common failure mode of markets, where consumers compete inefficiently on the same subset of goods (e.g., chasing the same small set of properties on a vacation rental platform). The typical economic story is that prices decongest by balancing supply and demand. But in modern online marketplaces, prices are typically set in a decentralized way by sellers, and the information about items is inevitably partial. The power of a platform is limited to controlling *representations*---the subset of information about items presented by default to users. This motivates the present study of *decongestion by representation*, where a platform seeks to learn representations that reduce congestion and thus improve social welfare. The technical challenge is twofold: relying only on revealed preferences from the choices of consumers, rather than true preferences; and the combinatorial problem associated with representations that determine the features to reveal in the default view. We tackle both challenges by proposing a *differentiable proxy of welfare* that can be trained end-to-end on consumer choice data. 
We develop sufficient conditions for when decongestion promotes welfare, and present the results of extensive experiments on both synthetic and real data that demonstrate the utility of our approach.", @@ -23824,8 +23824,8 @@ "id": 19077, "title": "Attention-based Iterative Decomposition for Tensor Product Representation", "authors": [ - "TAEWON PARK", - "inchul choi", + "Taewon Park", + "Inchul Choi", "Minho Lee" ], "abstract": "In recent research, Tensor Product Representation (TPR) is applied for the systematic generalization task of deep neural networks by learning the compositional structure of data. However, such prior works show limited performance in discovering and representing the symbolic structure from unseen test data because of the incomplete bindings to the structural representations. In this work, we propose an Attention-based Iterative Decomposition (AID) module that can effectively improve the binding for the structured representations encoded from the sequential input features with TPR. Our AID can be easily adapted to any TPR-based model and provides enhanced systematic decomposition through a competitive attention mechanism between input features and structured representations. In our experiments, AID shows effectiveness by significantly improving the performance of TPR-based prior works on the series of systematic generalization tasks. Moreover, in the quantitative and qualitative evaluations, AID produces more compositional and well-bound structural representations than other works.", @@ -23844,7 +23844,7 @@ "LIN Yong", "Lu Tan", "Yifan HAO", - "Honam Wong", + "Ho Nam Wong", "Hanze Dong", "WEIZHONG ZHANG", "Yujiu Yang", @@ -23863,7 +23863,7 @@ "id": 19071, "title": "Bayesian Low-rank Adaptation for Large Language Models", "authors": [ - "Adam Yang", + "Adam X. Yang", "Maxime Robeyns", "Xi Wang", "Laurence Aitchison" @@ -23928,7 +23928,7 @@ "Yu Yao", "Zhuo Huang", "Shiming Chen", - "chuanwu yang", + "Chuanwu Yang", "Mingming Gong", "Tongliang Liu" ], @@ -23991,7 +23991,7 @@ "id": 19044, "title": "AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning", "authors": [ - "Yuwei GUO", + "Yuwei Guo", "Ceyuan Yang", "Anyi Rao", "Zhengyang Liang", @@ -23999,7 +23999,7 @@ "Yu Qiao", "Maneesh Agrawala", "Dahua Lin", - "Bo DAI" + "Bo Dai" ], "abstract": "With the advance of text-to-image (T2I) diffusion models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. However, adding motion dynamics to existing high-quality personalized T2Is and enabling them to generate animations remains an open challenge. In this paper, we present AnimateDiff, a practical framework for animating personalized T2I models without requiring model-specific tuning. At the core of our framework is a plug-and-play motion module that can be trained once and seamlessly integrated into any personalized T2Is originating from the same base T2I. Through our proposed training strategy, the motion module effectively learns transferable motion priors from real-world videos. Once trained, the motion module can be inserted into a personalized T2I model to form a personalized animation generator. We further propose MotionLoRA, a lightweight fine-tuning technique for AnimateDiff that enables a pre-trained motion module to adapt to new motion patterns, such as different shot types, at a low training and data collection cost. 
We evaluate AnimateDiff and MotionLoRA on several public representative personalized T2I models collected from the community. The results demonstrate that our approaches help these models generate temporally smooth animation clips while preserving the visual quality and motion diversity.", "type": "Spotlight Poster", @@ -24016,7 +24016,7 @@ "id": 19036, "title": "Grokking in Linear Estimators -- A Solvable Model that Groks without Understanding", "authors": [ - "Noam Levi", + "Noam Itzhak Levi", "Alon Beck", "Yohai Bar-Sinai" ], @@ -24060,7 +24060,7 @@ "Yu Sun", "Hao Tian", "Ningyu Zhang", - "hua wu" + "Hua Wu" ], "abstract": "Reward modeling (*a.k.a.*, preference modeling) is instrumental for aligning large language models with human preferences, particularly within the context of reinforcement learning from human feedback (RLHF). While conventional reward models (RMs) have exhibited remarkable scalability, they oft struggle with fundamental functionality such as arithmetic computation, code execution, and factual lookup. In this paper, we propose a tool-augmented preference modeling approach, named Themis, to address these limitations by empowering RMs with access to external environments, including calculators and search engines. This approach not only fosters synergy between tool utilization and reward grading but also enhances interpretive capacity and scoring reliability. Our study delves into the integration of external tools into RMs, enabling them to interact with diverse external sources and construct task-specific tool engagement and reasoning traces in an autoregressive manner. We validate our approach across a wide range of domains, incorporating seven distinct external tools. Our experimental results demonstrate a noteworthy overall improvement of 17.7% across eight tasks in preference ranking. Furthermore, our approach outperforms Gopher 280B by 7.3% on TruthfulQA task in zero-shot evaluation. In human evaluations, RLHF trained with Themis attains an average win rate of 32% when compared to baselines across four distinct tasks. Additionally, we provide a comprehensive collection of tool-related RM datasets, incorporating data from seven distinct tool APIs, totaling 15,000 instances. We have made the code, data, and model checkpoints publicly available to facilitate and inspire further research advancements (https://github.com/ernie-research/Tool-Augmented-Reward-Model).", "type": "Spotlight Poster", @@ -24115,7 +24115,7 @@ }, { "id": 18270, - "title": "Regularized Robust MDPs and Risk-Sensitive MDPs: Equivalence, Policy Gradient, and Sample Complexity", + "title": "Soft Robust MDPs and Risk-Sensitive MDPs: Equivalence, Policy Gradient, and Sample Complexity", "authors": [ "Runyu Zhang", "Yang Hu", @@ -24142,7 +24142,7 @@ "Yaxi Lu", "Yankai Lin", "Xin Cong", - "xiangru tan\u2006h", + "Xiangru Tang", "Bill Qian", "Sihan Zhao", "Lauren Hong", @@ -24202,11 +24202,11 @@ }, { "id": 19007, - "title": "A Unified Approach for Online Continuous DR-Submodular Maximization", + "title": "Unified Projection-Free Algorithms for Adversarial DR-Submodular Optimization", "authors": [ "Mohammad Pedramfar", - "Yididiya Nadew", - "Christopher Quinn", + "Yididiya Y. 
Nadew", + "Christopher John Quinn", "Vaneet Aggarwal" ], "abstract": "This paper introduces unified projection-free Frank-Wolfe type algorithms for adversarial continuous DR-submodular optimization, spanning scenarios such as full information and (semi-)bandit feedback, monotone and non-monotone functions, different constraints, and types of stochastic queries. For every problem considered in the non-monotone setting, the proposed algorithms are either the first with proven sub-linear $\\alpha$-regret bounds or have better $\\alpha$-regret bounds than the state of the art, where $\\alpha$ is a corresponding approximation bound in the offline setting. In the monotone setting, the proposed approach gives state-of-the-art sub-linear $\\alpha$-regret bounds among projection-free algorithms in 7 of the 8 considered cases while matching the result of the remaining case. Additionally, this paper addresses semi-bandit and bandit feedback for adversarial DR-submodular optimization, advancing the understanding of this optimization area.", @@ -24246,9 +24246,9 @@ "Simon Schug", "Seijin Kobayashi", "Yassir Akram", - "Maciej Wo\u0142czyk", - "Alexandra M Proca", - "Johannes von Oswald", + "Maciej Wolczyk", + "Alexandra Maria Proca", + "Johannes Von Oswald", "Razvan Pascanu", "Joao Sacramento", "Angelika Steger" @@ -24267,7 +24267,7 @@ "title": "Lewis's Signaling Game as beta-VAE For Natural Word Lengths and Segments", "authors": [ "Ryo Ueda", - "TADAHIRO TANIGUCHI" + "Tadahiro Taniguchi" ], "abstract": "As a sub-discipline of evolutionary and computational linguistics, emergent communication (EC) studies communication protocols, called emergent languages, arising in simulations where agents communicate. A key goal of EC is to give rise to languages that share statistical properties with natural languages. In this paper, we reinterpret Lewis's signaling game, a frequently used setting in EC, as beta-VAE and reformulate its objective function as ELBO. Consequently, we clarify the existence of prior distributions of emergent languages and show that the choice of the priors can influence their statistical properties. Specifically, we address the properties of word lengths and segmentation, known as Zipf's law of abbreviation (ZLA) and Harris's articulation scheme (HAS), respectively. It has been reported that the emergent languages do not follow them when using the conventional objective. We experimentally demonstrate that by selecting an appropriate prior distribution, more natural segments emerge, while suggesting that the conventional one prevents the languages from following ZLA and HAS.", "type": "Poster", @@ -24280,14 +24280,14 @@ }, { "id": 18261, - "title": "Efficacy of Dual-Encoders for Extreme Multi-label Classification", + "title": "Dual-Encoders for Extreme Multi-label Classification", "authors": [ "Nilesh Gupta", "Fnu Devvrit", "Ankit Singh Rawat", "Srinadh Bhojanapalli", "Prateek Jain", - "Inderjit Dhillon" + "Inderjit S Dhillon" ], "abstract": "Dual-encoder models have demonstrated significant success in dense retrieval tasks for open-domain question answering that mostly involves zero-shot and few-shot scenarios. However, their performance in many-shot retrieval problems where training data is abundant, such as extreme multi-label classification (XMC), remains under-explored. 
Existing empirical evidence suggests that, for such problems, the dual-encoder method's accuracies lag behind the performance of state-of-the-art (SOTA) extreme classification methods that grow the number of learnable parameters linearly with the number of classes. As a result, some recent extreme classification techniques use a combination of dual-encoders and a learnable classification head for each class to excel on these tasks. In this paper, we investigate the potential of \"pure\" DE models in XMC tasks. Our findings reveal that when trained correctly standard dual-encoders can match or outperform SOTA extreme classification methods by up to 2% at Precision@1 even on the largest XMC datasets while being 20x smaller in terms of the number of trainable parameters. We further propose a differentiable topk error-based loss function, which can be used to specifically optimize for Recall@k metrics. We include our PyTorch implementation along with other resources for reproducing the results in the supplementary material.", "type": "Poster", @@ -24321,11 +24321,11 @@ "id": 18259, "title": "Fast, Expressive $\\mathrm{SE}(n)$ Equivariant Networks through Weight-Sharing in Position-Orientation Space", "authors": [ - "Erik Bekkers", + "Erik J Bekkers", "Sharvaree Vadgama", "Rob Hesselink", - "Putri van der Linden", - "David Wilson Romero" + "Putri A Van der Linden", + "David W. Romero" ], "abstract": "Based on the theory of homogeneous spaces we derive geometrically optimal edge attributes to be used within the flexible message passing framework. We formalize the notion of weight sharing in convolutional networks as the sharing of message functions over point-pairs that should be treated equally. We define equivalence classes of point-pairs that are identical up to a transformation in the group and derive attributes that uniquely identify these classes. Weight sharing is then obtained by conditioning message functions on these attributes. As an application of the theory, we develop an efficient equivariant group convolutional network for processing 3D point clouds. The theory of homogeneous spaces tells us how to do group convolutions with feature maps over the homogeneous space of positions $\\mathbb{R}^3$, position and orientations $\\mathbb{R}^3 {\\times} S^2$, and the group $\\mathrm{SE}(3)$ itself. Among these, $\\mathbb{R}^3 {\\times} S^2$ is an optimal choice due to the ability to represent directional information, which $\\mathbb{R}^3$ methods cannot, and it significantly enhances computational efficiency compared to indexing features on the full $\\mathrm{SE}(3)$ group. We empirically support this claim by reaching state-of-the-art results --in accuracy and speed-- on three different benchmarks: interatomic potential energy prediction, trajectory forecasting in N-body systems, and generating molecules via equivariant diffusion models.", "type": "Poster", @@ -24366,7 +24366,7 @@ "De-An Huang", "Boyi Li", "Jinwoo Shin", - "anima anandkumar" + "Anima Anandkumar" ], "abstract": "Video diffusion models have recently made great progress in generation quality, but are still limited by the high memory and computational requirements. This is because current video diffusion models often attempt to process high-dimensional videos directly. To tackle this issue, we propose content-motion latent diffusion model (CMD), a novel efficient extension of pretrained image diffusion models for video generation. 
Specifically, we propose an autoencoder that succinctly encodes a video as a combination of a content frame (like an image) and a low-dimensional motion latent representation. The former represents the common content, and the latter represents the underlying motion in the video, respectively. We generate the content frame by fine-tuning a pretrained image diffusion model, and we generate the motion latent representation by training a new lightweight diffusion model. A key innovation here is the design of a compact latent space that can directly utilizes a pretrained image diffusion model, which has not been done in previous latent video diffusion models. This leads to considerably better quality generation and reduced computational costs. For instance, CMD can sample a video 7.7$\\times$ faster than prior approaches by generating a video of 512$\\times$1024 resolution and length 16 in 3.1 seconds. Moreover, CMD achieves an FVD score of 212.7 on WebVid-10M, 27.3% better than the previous state-of-the-art of 292.4.", "type": "Poster", @@ -24382,7 +24382,7 @@ "title": "Neural Polynomial Gabor Fields for Macro Motion Analysis", "authors": [ "Chen Geng", - "Koven Yu", + "Hong-Xing Yu", "Sida Peng", "Xiaowei Zhou", "Jiajun Wu" @@ -24506,7 +24506,7 @@ "Krzysztof Maziarz", "Sarah Lewis", "Marwin Segler", - "Jos\u00e9 Miguel Hern\u00e1ndez Lobato" + "Jos\u00e9 Miguel Hern\u00e1ndez-Lobato" ], "abstract": "Retrosynthesis is the task of proposing a series of chemical reactions to create a desired molecule from simpler, buyable molecules. While previous works have proposed algorithms to find optimal solutions for a range of metrics (e.g. shortest, lowest-cost), these works generally overlook the fact that we have imperfect knowledge of the space of possible reactions, meaning plans created by the algorithm may not work in a laboratory. In this paper we propose a novel formulation of retrosynthesis in terms of stochastic processes to account for this uncertainty. We then propose a novel greedy algorithm called retro-fallback which maximizes the probability that at least one synthesis plan can be executed in the lab. Using in-silico benchmarks we demonstrate that retro-fallback generally produces better sets of synthesis plans than the popular MCTS and retro* algorithms.", "type": "Poster", @@ -24565,7 +24565,7 @@ "Fang Wu", "Zicheng Liu", "Cheng Tan", - "Stan Z Li" + "Stan Z. Li" ], "abstract": "Semi-supervised learning (SSL) has witnessed great progress with various improvements in the self-training framework with pseudo labeling. The main challenge is how to distinguish high-quality pseudo labels against the confirmation bias. However, existing pseudo-label selection strategies are limited to pre-defined schemes or complex hand-crafted policies specially designed for classification, failing to achieve high-quality labels, fast convergence, and task versatility simultaneously. To these ends, we propose a Semi-supervised Reward framework (SemiReward) that predicts reward scores to evaluate and filter out high-quality pseudo labels, which is pluggable to mainstream SSL methods in wide task types and scenarios. To mitigate confirmation bias, SemiReward is trained online in two stages with a generator model and subsampling strategy. 
With classification and regression tasks on 13 standard SSL benchmarks of three modalities, extensive experiments verify that SemiReward achieves significant performance gains and faster convergence speeds upon Pseudo Label, FlexMatch, and Free/SoftMatch.", "type": "Poster", @@ -24622,7 +24622,7 @@ "authors": [ "Nayoung Lee", "Kartik Sreenivasan", - "Jason Lee", + "Jason D. Lee", "Kangwook Lee", "Dimitris Papailiopoulos" ], @@ -24640,7 +24640,7 @@ "title": "DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer", "authors": [ "Junyuan Hong", - "Jiachen (Tianhao) Wang", + "Jiachen T. Wang", "Chenhui Zhang", "Zhangheng LI", "Bo Li", @@ -24707,7 +24707,7 @@ "title": "On the Scalability and Memory Efficiency of Semidefinite Programs for Lipschitz Constant Estimation of Neural Networks", "authors": [ "Zi Wang", - "Aaron Havens", + "Aaron J Havens", "Alexandre Araujo", "Yang Zheng", "Bin Hu", @@ -24753,7 +24753,7 @@ "Kirill Vishniakov", "Yida Yin", "Zhiqiang Shen", - "trevor darrell", + "Trevor Darrell", "Lingjie Liu", "Zhuang Liu" ], @@ -24808,7 +24808,7 @@ "Lizhang Chen", "Bo Liu", "Kaizhao Liang", - "Qiang Liu" + "qiang liu" ], "abstract": "Lion (Evolved Sign Momentum), a new optimizer discovered through program search, has shown promising results in training large AI models. It achieves results comparable to AdamW but with greater memory efficiency. As what we can expect from the result of the random search, Lion blends a number of elements from existing algorithms, including signed momentum, decoupled weight decay, Polayk and Nesterov momentum, but doesn't fit into any existing category of theoretically grounded optimizers. Thus, even though Lion appears to perform well as a general-purpose optimizer for a wide range of tasks, its theoretical basis remains uncertain. This absence of theoretical clarity limits opportunities to further enhance and expand Lion's efficacy. This work aims to demystify Lion. Using both continuous-time and discrete-time analysis, we demonstrate that Lion is a novel and theoretically grounded approach for minimizing a general loss function $f(x)$ while enforcing a bound constraint $||x||_\\infty \\leq 1/\\lambda$. Lion achieves this through the incorporation of decoupled weight decay, where $\\lambda$ represents the weight decay coefficient. Our analysis is facilitated by the development of a new Lyapunov function for the Lion updates. It applies to a wide range of Lion-$\\phi$ algorithms, where the $sign(\\cdot)$ operator in Lion is replaced by the subgradient of a convex function $\\phi$, leading to the solution of the general composite optimization problem $\\min_x f(x) + \\phi^*(x)$. Our findings provide valuable insights into the dynamics of Lion and pave the way for further enhancements and extensions of Lion-related algorithms.", "type": "Spotlight Poster", @@ -24823,7 +24823,7 @@ "id": 17745, "title": "DeepZero: Scaling Up Zeroth-Order Optimization for Deep Model Training", "authors": [ - "AOCHUAN CHEN", + "Aochuan Chen", "Yimeng Zhang", "Jinghan Jia", "James Diffenderfer", @@ -24875,7 +24875,7 @@ "authors": [ "Jia-Wang Bian", "Wenjing Bian", - "Victor Prisacariu", + "Victor Adrian Prisacariu", "Philip Torr" ], "abstract": "Neural surface reconstruction is sensitive to the camera pose noise, even when state-of-the-art pose estimators like COLMAP or ARKit are used. Existing Pose-NeRF joint optimisation methods have struggled to improve pose accuracy in challenging real-world scenarios. 
To overcome the challenges, we introduce the pose residual field (PoRF), a novel implicit representation that uses an MLP for regressing pose updates. Compared with the conventional per-frame pose parameter optimisation, this new representation is more robust due to parameter sharing that leverages global information over the entire sequence. Furthermore, we propose an epipolar geometry loss to enhance the supervision that leverages the correspondences exported from COLMAP results without the extra computational overhead. Our method yields promising results. On the DTU dataset, we reduce the rotation error of COLMAP poses by 78\\%, leading to the reduced reconstruction Chamfer distance from 3.48mm to 0.85mm. On the MobileBrick dataset that contains casually captured unbounded 360-degree videos, our method refines ARKit poses and improves the reconstruction F1 score from 69.18 to 75.67, outperforming that with the provided ground-truth pose (75.14). These achievements demonstrate the efficacy of our approach in refining camera poses and improving the accuracy of neural surface reconstruction in real-world scenarios.", @@ -24984,8 +24984,8 @@ "title": "Combinatorial Bandits for Maximum Value Reward Function under Value-Index Feedback", "authors": [ "Yiliu Wang", - "Milan Vojnovic", - "Wei Chen" + "Wei Chen", + "Milan Vojnovic" ], "abstract": "We consider a combinatorial multi-armed bandit problem for maximum value reward function under maximum value and index feedback. This is a new feedback structure that lies in between commonly studied semi-bandit and full-bandit feedback structures. We propose an algorithm and provide a regret bound for problem instances with stochastic arm outcomes according to arbitrary distributions with finite supports. The regret analysis rests on considering an extended set of arms, associated with values and probabilities of arm outcomes, and applying a smoothness condition. Our algorithm achieves a $O((k/\\Delta)\\log(T))$ distribution-dependent and a $\\tilde{O}(\\sqrt{T})$ distribution-independent regret where $k$ is the number of arms selected in each round, $\\Delta$ is a distribution-dependent reward gap and $T$ is the horizon time. Perhaps surprisingly, the regret bound is comparable to previously-known bound under more informative semi-bandit feedback. We demonstrate the effectiveness of our algorithm through experimental results.", "type": "Poster", @@ -25023,7 +25023,7 @@ "Seungjae Shin", "HeeSun Bae", "Byeonghu Na", - "Yoon-Yeong", + "Yoon-Yeong Kim", "Il-chul Moon" ], "abstract": "The objective of domain generalization (DG) is to enhance the transferability of the model learned from a source domain to unobserved domains. To prevent overfitting to a specific domain, Sharpness-Aware Minimization (SAM) reduces the sharpness of the source domain's loss landscape. Although SAM and its variants have delivered significant improvements in DG, we highlight that there's still potential for improvement in generalizing to unknown domains through the exploration on data space. Building on this motivation, this paper introduces an objective rooted in both parameter and data perturbed regions for domain generalization, termed Unknown Domain Inconsistency Minimization (UDIM). UDIM reduces the loss landscape inconsistency between source domain and unknown domains. As unknown domains are inaccessible, these domains are empirically crafted by perturbing instances from the source domain dataset. 
In particular, by aligning the flat minima acquired in the source domain to the loss landscape of perturbed domains, we expect to achieve generalization grounded on these flat minima for the unknown domains. Theoretically, we validate that merging SAM optimization with the UDIM objective establishes an upper bound for the true objective of the DG task. In an empirical aspect, UDIM consistently outperforms SAM variants across multiple DG benchmark datasets. Notably, UDIM shows statistically significant improvements in scenarios with more restrictive domain information, underscoring UDIM's generalization capability in unseen domains.", @@ -25043,7 +25043,7 @@ "authors": [ "Yuren Cong", "Mengmeng Xu", - "Christian Simon", + "christian simon", "Shoufa Chen", "Jiawei Ren", "Yanping Xie", @@ -25126,7 +25126,7 @@ "Ilan Naiman", "N. Benjamin Erichson", "Pu Ren", - "Michael W Mahoney", + "Michael W. Mahoney", "Omri Azencot" ], "abstract": "Generating realistic time series data is important for numerous engineering and scientific applications. Several existing works tackle this problem using generative adversarial networks, however, GANs are often unstable during training and suffer from mode collpase. While variational autoencoders (VAEs) are more robust to the above issues, surprisingly, they are considered less for time series generation. In this work, we introduce Koopman VAE (KVAE), a new generative framework that is based on a novel design for the model prior, and that can be optimized for either regular and irregular training data. Inspired by the Koopman theory, we represent the latent conditional prior dynamics using a linear map. Our approach enhances generative modeling with two desired features: (i) incorporating domain knowledge can be achieved by leverageing spectral tools that prescribe constraints on the eigenvalues of the linear map; and (ii) studying the qualitative behavior and stablity of the system can be performed using tools from dynamical systems theory. Our results show that KVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks. Whether trained on regular or irregular data, KVAE generates time series that improve both discriminative and predictive metrics. Further, we present visual evidence suggesting that KVAE learns probability density functions that better approximate the empirical ground truth distribution.", @@ -25168,7 +25168,7 @@ "Ofir Nachum", "Yutaka Matsuo", "Aleksandra Faust", - "Shixiang Gu", + "Shixiang Shane Gu", "Izzeddin Gur" ], "abstract": "The progress of autonomous web navigation has been hindered by the dependence on billions of exploratory interactions via online reinforcement learning, and domain-specific model designs that make it difficult to leverage generalization from rich out-of-domain data.In this work, we study data-driven offline training for web agents with vision-language foundation models.We propose an instruction-following multimodal agent, WebGUM, that observes both webpage screenshots and HTML pages and outputs web navigation actions, such as click and type.WebGUM is trained by jointly finetuning an instruction-finetuned language model and a vision encoder with temporal and local perception on a large corpus of demonstrations.We empirically demonstrate this recipe improves the agent's ability of grounded multimodal perception, HTML comprehension, and multi-step reasoning, outperforming prior works by a significant margin. 
On the MiniWoB, we improve over the previous best offline methods by more than 45.8%, even outperforming online-finetuned SoTA, humans, and GPT-4-based agent. On the WebShop benchmark, our 3-billion-parameter model achieves superior performance to the existing SoTA, PaLM-540B.Furthermore, WebGUM exhibits strong positive transfer to the real-world planning tasks on the Mind2Web.We also collect 347K high-quality demonstrations using our trained models, 38 times larger than prior work, and make them available to promote future research in this direction.", @@ -25206,7 +25206,7 @@ "Yuxuan Jiang", "Yao Mu", "Chen Chen", - "Shengbo Li" + "Shengbo Eben Li" ], "abstract": "Motion prediction is crucial for autonomous vehicles to operate safely in complex traffic environments. Extracting effective spatiotemporal relationships among traffic elements is key to accurate forecasting. Inspired by the successful practice of pretrained large language models, this paper presents SEPT, a modeling framework that leverages self-supervised learning to develop powerful spatiotemporal understanding for complex traffic scenes. Specifically, our approach involves three masking-reconstruction modeling tasks on scene inputs including agents' trajectories and road network, pretraining the scene encoder to capture kinematics within trajectory, spatial structure of road network, and interactions among roads and agents. The pretrained encoder is then finetuned on the downstream forecasting task. Extensive experiments demonstrate that SEPT, without elaborate architectural design or manual feature engineering, achieves state-of-the-art performance on the Argoverse 1 and Argoverse 2 motion forecasting benchmarks, outperforming previous methods on all main metrics by a large margin.", "type": "Poster", @@ -25279,11 +25279,11 @@ }, { "id": 18208, - "title": "Combining Spatial and Temporal Abstraction in Planning for Better Generalization", + "title": "Consciousness-Inspired Spatio-Temporal Abstractions for Better Generalization in Reinforcement Learning", "authors": [ "Mingde Zhao", "Safa Alver", - "Harm Seijen", + "Harm van Seijen", "Romain Laroche", "Doina Precup", "Yoshua Bengio" @@ -25339,8 +25339,8 @@ "title": "SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training", "authors": [ "Kazem Meidani", - "Seyedeh Parshin Shojaee", - "Chandan Reddy", + "Parshin Shojaee", + "Chandan K. Reddy", "Amir Barati Farimani" ], "abstract": "In an era where symbolic mathematical equations are indispensable for modeling complex natural phenomena, scientific inquiry often involves collecting observations and translating them into mathematical expressions. Recently, deep learning has emerged as a powerful tool for extracting insights from data. However, existing models typically specialize in either numeric or symbolic domains, and are usually trained in a supervised manner tailored to specific tasks. This approach neglects the substantial benefits that could arise from a task-agnostic unified understanding between symbolic equations and their numeric counterparts. To bridge the gap, we introduce SNIP, a Symbolic-Numeric Integrated Pre-training, which employs joint contrastive learning between symbolic and numeric domains, enhancing their mutual similarities in the pre-trained embeddings. By performing latent space analysis, we observe that SNIP provides cross-domain insights into the representations, revealing that symbolic supervision enhances the embeddings of numeric data and vice versa. 
We evaluate SNIP across diverse tasks, including symbolic-to-numeric mathematical property prediction and numeric-to-symbolic equation discovery, commonly known as symbolic regression. Results show that SNIP effectively transfers to various tasks, consistently outperforming fully supervised baselines and competing strongly with established task-specific methods, especially in few-shot learning scenarios where available data is limited.", @@ -25361,7 +25361,7 @@ "Long Lian", "Baifeng Shi", "Adam Yala", - "trevor darrell", + "Trevor Darrell", "Boyi Li" ], "abstract": "Text-conditioned diffusion models have emerged as a promising tool for neural video generation. However, current models still struggle with intricate spatiotemporal prompts and often generate restricted or incorrect motion (e.g., even lacking the ability to be prompted for objects moving from left to right). To address these limitations, we introduce LLM-grounded Video Diffusion (LVD). Instead of directly generating videos from the text inputs, LVD first leverages a large language model (LLM) to generate dynamic scene layouts based on the text inputs and subsequently uses the generated layouts to guide a diffusion model for video generation. We show that LLMs are able to understand complex spatiotemporal dynamics from text alone and generate layouts that align closely with both the prompts and the object motion patterns typically observed in the real world. We then propose to guide video diffusion models with these layouts by adjusting the attention maps. Our approach is training-free and can be integrated into any video diffusion model that admits classifier guidance. Our results demonstrate that LVD significantly outperforms its base video diffusion model and several strong baseline methods in faithfully generating videos with the desired attributes and motion patterns.", @@ -25410,7 +25410,7 @@ }, { "id": 18886, - "title": "Mitigating Severe Robustness Degradation on Graphs", + "title": "Mitigating Emergent Robustness Degradation while Scaling Graph Learning", "authors": [ "Xiangchi Yuan", "Chunhui Zhang", @@ -25431,7 +25431,7 @@ "id": 18204, "title": "SweetDreamer: Aligning Geometric Priors in 2D diffusion for Consistent Text-to-3D", "authors": [ - "Weiyu LI", + "Weiyu Li", "Rui Chen", "Xuelin Chen", "Ping Tan" @@ -25453,7 +25453,7 @@ "Yilun Xu", "Valentin De Bortoli", "Regina Barzilay", - "Tommi Jaakkola" + "Tommi S. Jaakkola" ], "abstract": "In light of the widespread success of generative models, a significant amount of research has gone into speeding up their sampling time. However, generative models are often sampled multiple times to obtain a diverse set incurring in a cost that is orthogonal to sampling time. We tackle the question of how to improve diversity and sample efficiency by moving beyond the common assumption of independent samples. For this we propose particle guidance, an extension of diffusion-based generative sampling where a joint-particle time-evolving potential enforces diversity. We analyze theoretically the joint distribution that particle guidance generates, its implications on the choice of potential, and the connections with methods in other disciplines. 
Empirically, we test the framework both in the setting of conditional image generation, where we are able to increase diversity without affecting quality, and molecular conformer generation, where we reduce the state-of-the-art median error by 13% on average.", "type": "Poster", @@ -25527,7 +25527,7 @@ "authors": [ "Xinyun Chen", "Maxwell Lin", - "Nathanael Schaerli", + "Nathanael Sch\u00e4rli", "Denny Zhou" ], "abstract": "Large language models (LLMs) have achieved impressive performance on code generation. However, for complex programming tasks, generating the correct solution in one go becomes challenging, thus some prior works have designed program repair approaches to improve code generation performance. In this work, we propose self-debugging, which teaches a large language model to debug its predicted program. In particular, we demonstrate that self-debugging can teach the large language model to perform rubber duck debugging; i.e., without any human feedback on the code correctness or error messages, the model is able to identify its mistakes by leveraging code execution and explaining the generated code in natural language. Self-debugging achieves the state-of-the-art performance on several code generation benchmarks, including the Spider dataset for text-to-SQL generation, TransCoder for C++-to-Python translation, and MBPP for text-to-Python generation. On the Spider benchmark where there are no unit tests to verify the correctness of predictions, self-debugging with code explanation consistently improves the baseline by 2-3%, and improves the prediction accuracy on problems of the hardest level by 9%. On TransCoder and MBPP where unit tests are available, self-debugging improves the baseline accuracy by up to 12%. Meanwhile, by leveraging feedback messages and reusing failed predictions, self-debugging notably improves sample efficiency, and can match or outperform baseline models that generate more than 10$\\times$ candidate programs.", @@ -25586,7 +25586,7 @@ "Zhenting Wang", "Chen Chen", "Lingjuan Lyu", - "Dimitris Metaxas", + "Dimitris N. Metaxas", "Shiqing Ma" ], "abstract": "Recent text-to-image diffusion models have shown surprising performance in generating high-quality images. However, concerns have arisen regarding the unauthorized data usage during the training or fine-tuning process. One example is when a model trainer collects a set of images created by a particular artist and attempts to train a model capable of generating similar images without obtaining permission and giving credit to the artist. To address this issue, we propose a method for detecting such unauthorized data usage by planting the injected memorization into the text-to-image diffusion models trained on the protected dataset. Specifically, we modify the protected images by adding unique contents on these images using stealthy image warping functions that are nearly imperceptible to human but can be captured and memorized by diffusion models. By analyzing whether the model has memorized the injected content (i.e., whether the generated images are processed by the injected post-processing function), we can detect models that had illegally utilized the unauthorized data. 
Experiments on Stable Diffusion and VQ Diffusion with different model training or fine-tuning methods (i.e, LoRA, DreamBooth, and standard training) demonstrate the effectiveness of our proposed method in detecting unauthorized data usages.", @@ -25605,7 +25605,7 @@ "title": "Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions", "authors": [ "Taehyeon Kim", - "JOONKEE KIM", + "Joonkee Kim", "Gihun Lee", "Se-Young Yun" ], @@ -25626,10 +25626,10 @@ "Chenyang Ma", "Tianle Chen", "Cheng Perng Phoo", - "Katie Luo", + "Katie Z Luo", "Yurong You", "Mark Campbell", - "Kilian Weinberger", + "Kilian Q Weinberger", "Bharath Hariharan", "Wei-Lun Chao" ], @@ -25647,7 +25647,7 @@ "title": "Near-Optimal Solutions of Constrained Learning Problems", "authors": [ "Juan Elenter", - "Luiz Chamon", + "Luiz F. O. Chamon", "Alejandro Ribeiro" ], "abstract": "With the widespread adoption of machine learning systems, the need to curtail their behavior has become increasingly apparent. This is evidenced by recent advancements towards developing models that satisfy robustness, safety and fairness requirements. Imposing these requirements leads to constrained learning problems, which can be tackled with dual ascent methods. However, convergence guarantees for dual ascent algorithms typically involve a randomized or averaged sequence of primal iterates. These solutions are impractical, since they require storing an ever growing sequence of models. Although it has been observed that final iterates perform well in practice, theoretical guarantees for their optimality and feasibility have remained elusive. In this work, we characterize the infeasibility of Lagrangian minimizers associated with optimal dual variables, which leads to a sub-optimality bound for best primal iterates. To do this, we leverage the fact that constrained learning problems are parametrized versions of convex functional programs. This bound sheds light on how the richness of the parametrization and the curvature of the objective impact the convergence of primal iterates. We empirically validate this finding in learning problems with fairness constraints.", @@ -25739,7 +25739,7 @@ "authors": [ "Jose Javier Gonzalez Ortiz", "John Guttag", - "Adrian Dalca" + "Adrian V Dalca" ], "abstract": "Hypernetworks, neural networks that predict the parameters of another neural network, are powerful models that have been successfully used in diverse applications from image generation to multi-task learning. Unfortunately, existing hypernetworks are often challenging to train. Training typically converges far more slowly than for non-hypernetwork models, and the rate of convergence can be very sensitive to hyperparameter choices. In this work, we identify a fundamental and previously unidentified problem that contributes to the challenge of training hypernetworks: a magnitude proportionality between the inputs and outputs of the hypernetwork. We demonstrate both analytically and empirically that this can lead to unstable optimization, thereby slowing down convergence, and sometimes even preventing any learning. We present a simple solution to this problem using a revised hypernetwork formulation that we call Magnitude Invariant Parametrizations (MIP). We demonstrate the proposed solution on several hypernetwork tasks, where it consistently stabilizes training and achieves faster convergence. 
Furthermore, we perform a comprehensive ablation study including choices of activation function, normalization strategies, input dimensionality, and hypernetwork architecture; and find that MIP improves training in all scenarios. We provide easy-to-use code that can turn existing networks into MIP-based hypernetworks.", "type": "Poster", @@ -25770,12 +25770,12 @@ }, { "id": 18189, - "title": "$\\mathcal{B}$-Coder: On Value-Based Deep Reinforcement Learning for Program Synthesis", + "title": "$\\mathcal{B}$-Coder: Value-Based Deep Reinforcement Learning for Program Synthesis", "authors": [ "Zishun Yu", "Yunzhe Tao", "Liyu Chen", - "TAO SUN", + "Tao Sun", "Hongxia Yang" ], "abstract": "Program synthesis aims to create accurate, executable code from natural language descriptions. This field has leveraged the power of reinforcement learning (RL) in conjunction with large language models (LLMs), significantly enhancing code generation capabilities. This integration focuses on directly optimizing functional correctness, transcending conventional supervised losses. While current literature predominantly favors policy-based algorithms, attributes of program synthesis suggest a natural compatibility with value-based methods. This stems from rich collection of off-policy programs developed by human programmers, and the straightforward verification of generated programs through automated unit testing (i.e. easily obtainable rewards in RL language). Diverging from the predominant use of policy-based algorithms, our work explores the applicability of value-based approaches, leading to the development of our $\\mathcal{B}$-Coder (pronounced Bellman coder). Yet, training value-based methods presents challenges due to the enormous search space inherent to program synthesis. To this end, we propose an initialization protocol for RL agents utilizing pre-trained LMs and a conservative Bellman operator to reduce training complexities. Moreover, we demonstrate how to leverage the learned value functions as a dual strategy to post-process generated programs. Our empirical evaluations demonstrated $\\mathcal{B}$-Coder's capability in achieving state-of-the-art performance compared with policy-based methods. Remarkably, this achievement is reached with minimal reward engineering effort, highlighting the effectiveness of value-based RL, independent of reward designs.", @@ -25811,7 +25811,7 @@ }, { "id": 18187, - "title": "Unifying Feature and Cost Aggregation with Transformers for Dense Correspondence", + "title": "Unifying Feature and Cost Aggregation with Transformers for Semantic and Visual Correspondence", "authors": [ "Sunghwan Hong", "Seokju Cho", @@ -25832,7 +25832,7 @@ "title": "Nougat: Neural Optical Understanding for Academic Documents", "authors": [ "Lukas Blecher", - "Guillem Cucurull Preixens", + "Guillem Cucurull", "Thomas Scialom", "Robert Stojnic" ], @@ -25890,8 +25890,8 @@ "authors": [ "Haoran Xu", "Young Jin Kim", - "Amr Mohamed Nabil Aly Aly Sharaf", - "Hany Awadalla" + "Amr Sharaf", + "Hany Hassan Awadalla" ], "abstract": "Generative Large Language Models (LLMs) have achieved remarkable advancements in various NLP tasks. However, these advances have not been reflected in the translation task, especially those with moderate model sizes (i.e., 7B or 13B parameters), which still lag behind conventional supervised encoder-decoder translation models. Previous studies have attempted to improve the translation capabilities of these LLMs, but their gains have been limited. 
In this study, we propose a novel fine-tuning approach for LLMs that is specifically designed for the translation task, eliminating the need for the abundant parallel data that traditional translation models usually depend on.Our approach consists of two fine-tuning stages: initial fine-tuning on monolingual data followed by subsequent fine-tuning on a small set of high-quality parallel data. We introduce the LLM developed through this strategy as **A**dvanced **L**anguage **M**odel-based tr**A**nslator (**ALMA**). Based on LLaMA-2 as our underlying model, our results show that the model can achieve an average improvement of more than 12 BLEU and 12 COMET over its zero-shot performance across 10 translation directions from the WMT'21 (2 directions) and WMT'22 (8 directions) test datasets. The performance is significantly better than all prior work and even superior to the NLLB-54B model \\citep{nllb} and GPT-3.5-text-davinci-003, with only 7B or 13B parameters. This method establishes the foundation for a novel training paradigm in machine translation.", "type": "Poster", @@ -25906,13 +25906,13 @@ "id": 18181, "title": "AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation", "authors": [ - "Zihao Tang", - "Shengyu Zhang", + "Zihao TANG", "Zheqi Lv", + "Shengyu Zhang", "Yifan Zhou", "Xinyu Duan", - "Kun Kuang", - "Fei Wu" + "Fei Wu", + "Kun Kuang" ], "abstract": "Due to privacy or patent concerns, a growing number of large models are released without granting access to their training data, making transferring their knowledge inefficient and problematic. In response, Data-Free Knowledge Distillation (DFKD) methods have emerged as direct solutions. However, simply adopting models derived from DFKD for real-world applications suffers significant performance degradation, due to the discrepancy between teachers' training data and real-world scenarios (student domain). The degradation stems from the portions of teachers' knowledge that are not applicable to the student domain. They are specific to the teacher domain and would undermine students' performance. Hence, selectively transferring teachers' appropriate knowledge becomes the primary challenge in DFKD. In this work, we propose a simple but effective method AuG-KD. It utilizes an uncertainty-guided and sample-specific anchor to align student-domain data with the teacher domain and leverages a generative method to progressively trade off the learning process between OOD knowledge distillation and domain-specific information learning via mixup learning. Extensive experiments in 3 datasets and 8 settings demonstrate the stability and superiority of our approach.", "type": "Poster", @@ -25927,10 +25927,10 @@ "id": 18178, "title": "REFACTOR: Learning to Extract Theorems from Proofs", "authors": [ - "Jin Zhou", + "Jin Peng Zhou", "Yuhuai Wu", "Qiyang Li", - "Roger Grosse" + "Roger Baker Grosse" ], "abstract": "Human mathematicians are often good at recognizing modular and reusable theorems that make complex mathematical results within reach. In this paper, we propose a novel method called theoREm-from-prooF extrACTOR (REFACTOR) for training neural networks to mimic this ability in formal mathematical theorem proving. We show on a set of unseen proofs, REFACTOR is able to extract 19.6\\% of the theorems that humans would use to write the proofs. When applying the model to the existing Metamath library, REFACTOR extracted 16 new theorems. 
With newly extracted theorems, we show that the existing proofs in the MetaMath database can be refactored. The new theorems are used very frequently after refactoring, with an average usage of 733.5 times, and help shorten the proof lengths. Lastly, we demonstrate that the prover trained on the new-theorem refactored dataset proves more test theorems and outperforms state-of-the-art baselines by frequently leveraging a diverse set of newly extracted theorems.", "type": "Poster", @@ -25968,11 +25968,11 @@ "authors": [ "Jihao Andreas Lin", "Shreyas Padhy", - "Javier Antor\u00e1n", + "Javier Antoran", "Austin Tripp", "Alexander Terenin", "Csaba Szepesvari", - "Jos\u00e9 Miguel Hern\u00e1ndez Lobato", + "Jos\u00e9 Miguel Hern\u00e1ndez-Lobato", "David Janz" ], "abstract": "We study the optimisation problem associated with Gaussian process regression using squared loss. The most common approach to this problem is to apply an exact solver, such as conjugate gradient descent, either directly on the problem or on a reduced-order version of it. However, stochastic gradient descent has recently gained traction in the Gaussian process literature, driven largely by its successes in deep learning. In this paper, we show that this approach when done right---by which we mean using specific insights from the optimisation and kernel communities---is highly effective.We thus introduce a particular stochastic dual gradient descent algorithm, conveniently implementable with a few lines of code using any deep learning framework. We explain our design decisions by illustrating their advantage against alternatives with ablation studies.We then show that the new method is highly competitive: our evaluations on standard regression benchmarks and a Bayesian optimisation task set our approach apart from conjugate gradients, variational Gaussian process approximations, and a prior version of stochastic gradient descent tailored for Gaussian processes. On a molecular binding affinity prediction task, our method places Gaussian process regression on par in terms of performance with graph neural networks.", @@ -26045,7 +26045,7 @@ "id": 18172, "title": "Efficient Dynamics Modeling in Interactive Environments with Koopman Theory", "authors": [ - "Arnab Mondal", + "Arnab Kumar Mondal", "Siba Smarak Panigrahi", "Sai Rajeswar", "Kaleem Siddiqi", @@ -26083,7 +26083,7 @@ "id": 18167, "title": "Can We Evaluate Domain Adaptation Models Without Target-Domain Labels?", "authors": [ - "JIANFEI YANG", + "Jianfei Yang", "Hanjie Qian", "Yuecong Xu", "Kai Wang", @@ -26124,8 +26124,8 @@ "title": "Adaptive Federated Learning with Auto-Tuned Clients", "authors": [ "Junhyung Lyle Kim", - "Mohammad Taha Toghani", - "Cesar Uribe", + "Taha Toghani", + "Cesar A Uribe", "Anastasios Kyrillidis" ], "abstract": "Federated learning (FL) is a distributed machine learning framework where the global model of a central server is trained via multiple collaborative steps by participating clients without sharing their data. While being a flexible framework, where the distribution of local data, participation rate, and computing power of each client can greatly vary, such flexibility gives rise to many new challenges, especially in the hyperparameter tuning on the client side. We propose $\\Delta$-SGD, a simple step size rule for SGD that enables each client to use its own step size by adapting to the local smoothness of the function each client is optimizing. 
We provide theoretical and empirical results where the benefit of the client adaptivity is shown in various FL scenarios.", @@ -26141,7 +26141,7 @@ "id": 18824, "title": "Differentiable Euler Characteristic Transforms for Shape Classification", "authors": [ - "Ernst Roell", + "Ernst R\u00f6ell", "Bastian Rieck" ], "abstract": "The Euler Characteristic Transform (ECT) has proven to be a powerful representation, combining geometrical and topological characteristics of shapes and graphs. However, the ECT was hitherto unable to learn task-specific representations. We overcome this issue and develop a novel computational layer that enables learning the ECT in an end-to-end fashion. Our method, DECT, is fast and computationally efficient, while exhibiting performance on a par with more complex models in both graph and point cloud classification tasks. Moreover, we show that this seemingly unexpressive statistic still provides the same topological expressivity as more complex topological deep learning layers provide.", @@ -26252,7 +26252,7 @@ "Quentin Delfosse", "Patrick Schramowski", "Martin Mundt", - "Alejandro Molina Ramirez", + "Alejandro Molina", "Kristian Kersting" ], "abstract": "Latest insights from biology show that intelligence not only emerges from the connections between neurons, but that individual neurons shoulder more computational responsibility than previously anticipated. Specifically, neural plasticity should be critical in the context of constantly changing reinforcement learning (RL) environments, yet current approaches still primarily employ static activation functions. In this work, we motivate the use of adaptable activation functions in RL and show that rational activation functions are particularly suitable for augmenting plasticity. Inspired by residual networks, we derive a condition under which rational units are closed under residual connections and formulate a naturally regularised version. The proposed joint-rational activation allows for desirable degrees of flexibility, yet regularises plasticity to an extent that avoids overfitting by leveraging a mutual set of activation function parameters across layers. We demonstrate that equipping popular algorithms with (joint) rational activations leads to consistent improvements on different games from the Atari Learning Environment benchmark, notably making DQN competitive to DDQN and Rainbow.", @@ -26270,7 +26270,7 @@ "authors": [ "Jiuding Sun", "Chantal Shaib", - "Byron Wallace" + "Byron C Wallace" ], "abstract": "Instruction fine-tuning has recently emerged as a promising approach for improving the zero-shot capabilities of Large Language Models (LLMs) on new tasks. This technique has shown particular strength in improving the performance of modestly sized LLMs, sometimes inducing performance competitive with much larger model variants. In this paper, we ask two questions: (1) How sensitive are instruction-tuned models to the particular phrasings of instructions, and, (2) How can we make them more robust to such natural language variation? To answer the former, we collect a set of 319 instructions manually written by NLP practitioners for over 80 unique tasks included in widely used benchmarks, and we evaluate the variance and average performance of these instructions as compared to instruction phrasings observed during instruction fine-tuning. We find that using novel (unobserved) but appropriate instruction phrasings consistently degrades model performance, sometimes substantially so. 
Further, such natural instructions yield a wide variance in downstream performance, despite their semantic equivalence. Put another way, instruction-tuned models are not especially robust to instruction re-phrasings. We propose a simple method to mitigate this issue by introducing ``soft prompt'' embedding parameters and optimizing these to maximize the similarity between representations of semantically equivalent instructions. We show that this method consistently improves the robustness of instruction-tuned models.", "type": "Spotlight Poster", @@ -26290,7 +26290,7 @@ "Mengzhao Chen", "Shitao Tang", "Kaipeng Zhang", - "Gao Peng", + "Peng Gao", "Fengwei An", "Yu Qiao", "Ping Luo" @@ -26308,7 +26308,7 @@ }, { "id": 18145, - "title": "A Private Watermark for Large Language Models", + "title": "An Unforgeable Publicly Verifiable Watermark for Large Language Models", "authors": [ "Aiwei Liu", "Leyi Pan", @@ -26316,7 +26316,7 @@ "Shuang Li", "Lijie Wen", "Irwin King", - "Philip Yu" + "Philip S. Yu" ], "abstract": "Recently, text watermarking algorithms for large language models (LLMs) have been proposed to mitigate the potential harms of text generated by LLMs, including fake news and copyright issues. However, current watermark detection algorithms require the secret key used in the watermark generation process, making them susceptible to security breaches and counterfeiting.To address this limitation, we propose the first private watermarking algorithm that uses two different neural networks for watermark generation and detection, instead of using the same key at both stages. Meanwhile, the token embedding parameters are shared between the generation and detection networks, which makes the detection network achieve a high accuracy very efficiently.Experiments demonstrate that Our algorithm attains high detection accuracy and computational efficiency through neural networks with a minimized number of parameters. Subsequent analysis confirms the high complexity involved in reverse-engineering the watermark generation algorithms from the detection network.", "type": "Poster", @@ -26332,7 +26332,7 @@ "title": "LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models", "authors": [ "Gunho Park", - "baeseong park", + "Baeseong park", "Minsub Kim", "Sungjae Lee", "Jeonghoon Kim", @@ -26396,7 +26396,7 @@ "authors": [ "Pablo Barcelo", "Alexander Kozachinskiy", - "Anthony Lin", + "Anthony Widjaja Lin", "Vladimir Podolskii" ], "abstract": "We contribute to the study of formal languages that can be recognized by transformer encoders. We focus on two self-attention mechanisms: (1) UHAT (Unique Hard Attention Transformers) and (2) AHAT (Average Hard Attention Transformers). UHAT encoders are known to recognize only languages inside the circuit complexity class ${\\sf AC}^0$, i.e., accepted by a family of poly-sized and depth-bounded boolean circuits with unbounded fan-ins. On the other hand, AHAT encoders can recognize languages outside ${\\sf AC}^0$), but their expressive power still lies within the bigger circuit complexity class ${\\sf TC}^0$, i.e., ${\\sf AC}^0$-circuits extended by majority gates.We first show a negative result that there is an ${\\sf AC}^0$-language that cannot be recognized by an UHAT encoder. On the positive side, we show that UHAT encoders can recognize a rich fragment of ${\\sf AC}^0$-languages, namely, all languages definable in first-order logic with arbitrary unary numerical predicates. 
This logic, includes, for example, all regular languages from ${\\sf AC}^0$. We then show that AHAT encoders can recognize all languages of our logic even when we enrich it with counting terms. We apply these results to derive new results on the expressive power of UHAT and AHAT up to permutation of letters (a.k.a. Parikh images).", @@ -26527,12 +26527,12 @@ }, { "id": 18134, - "title": "DyVal: Graph-informed Dynamic Evaluation of Large Language Models", + "title": "DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks", "authors": [ "Kaijie Zhu", "Jiaao Chen", "Jindong Wang", - "Neil Gong", + "Neil Zhenqiang Gong", "Diyi Yang", "Xing Xie" ], @@ -26550,12 +26550,12 @@ "title": "Confronting Reward Model Overoptimization with Constrained RLHF", "authors": [ "Ted Moskovitz", - "Aaditya Singh", + "Aaditya K Singh", "DJ Strouse", "Tuomas Sandholm", "Ruslan Salakhutdinov", "Anca Dragan", - "Stephen McAleer" + "Stephen Marcus McAleer" ], "abstract": "Large language models are typically aligned with human preferences by optimizing reward models (RMs) fitted to human feedback. However, human preferences are multi-faceted, and it is increasingly common to derive reward from a composition of simpler reward models which each capture a different aspect of language quality. This itself presents a challenge, as it is difficult to appropriately weight these component RMs when combining them. Compounding this difficulty, because any RM is only a proxy for human evaluation, this process is vulnerable to *overoptimization*, wherein past a certain point, accumulating higher reward is associated with worse human ratings. In this paper, we perform the first study on overoptimization in composite RMs, showing that correlation between component RMs has a significant effect on the locations of these points. We then introduce an approach to solve this issue using constrained reinforcement learning as a means of preventing the agent from exceeding each RM's threshold of usefulness. Our method addresses the problem of weighting component RMs by learning dynamic weights, naturally given by the Lagrange multipliers. As a result, each RM stays within the range at which it is an effective proxy, improving evaluation performance. Finally, we introduce an adaptive method using gradient-free optimization to identify and optimize towards these points during a single run.", "type": "Spotlight Poster", @@ -26590,8 +26590,8 @@ "Chongyu Fan", "Jiancheng Liu", "Yihua Zhang", - "Dennis Wei", "Eric Wong", + "Dennis Wei", "Sijia Liu" ], "abstract": "With evolving data regulations, machine unlearning (MU) has become an important tool for fostering trust and safety in today's AI models. However, existing MU methods focusing on data and/or weight perspectives often suffer limitations in unlearning accuracy, stability, and cross-domain applicability. To address these challenges, we introduce the concept of 'weight saliency' for MU, drawing parallels with input saliency in model explanation. This innovation directs MU's attention toward specific model weights rather than the entire model, improving effectiveness and efficiency. The resultant method that we call saliency unlearning (SalUn) narrows the performance gap with 'exact' unlearning (model retraining from scratch after removing the forgetting data points). To the best of our knowledge, SalUn is the first principled MU approach that can effectively erase the influence of forgetting data, classes, or concepts in both image classification and generation tasks. 
As highlighted below, For example, SalUn yields a stability advantage in high-variance random data forgetting, e.g., with a 0.2% gap compared to exact unlearning on the CIFAR-10 dataset. Moreover, in preventing conditional diffusion models from generating harmful images, SalUn achieves nearly 100% unlearning accuracy, outperforming current state-of-the-art baselines like Erased Stable Diffusion and Forget-Me-Not. Codes are available at https://github.com/OPTML-Group/Unlearn-Saliency.**WARNING**: This paper contains model outputs that may be offensive in nature.", @@ -26724,10 +26724,10 @@ "id": 18126, "title": "Dual Associated Encoder for Face Restoration", "authors": [ - "Yu-Ju Tsai", + "YU-JU TSAI", "Yu-Lun Liu", "Lu Qi", - "Kelvin Chan", + "Kelvin C.K. Chan", "Ming-Hsuan Yang" ], "abstract": "Restoring facial details from low-quality (LQ) images has remained challenging due to the nature of the problem caused by various degradations in the wild. The codebook prior has been proposed to address the ill-posed problems by leveraging an autoencoder and learned codebook of high-quality (HQ) features, achieving remarkable quality.However, existing approaches in this paradigm frequently depend on a single encoder pre-trained on HQ data for restoring HQ images, disregarding the domain gap and distinct feature representations between LQ and HQ images.As a result, encoding LQ inputs with the same encoder could be insufficient, resulting in imprecise feature representation and leading to suboptimal performance.To tackle this problem, we propose a novel dual-branch framework named $\\textit{DAEFR}$. Our method introduces an auxiliary LQ branch that extracts domain-specific information from the LQ inputs. Additionally, we incorporate association training to promote effective synergy between the two branches, enhancing code prediction and restoration quality.We evaluate the effectiveness of DAEFR on both synthetic and real-world datasets, demonstrating its superior performance in restoring facial details.Source codes and trained models will be publicly released.", @@ -26804,7 +26804,7 @@ "id": 18123, "title": "Diffeomorphic Mesh Deformation via Efficient Optimal Transport for Cortical Surface Reconstruction", "authors": [ - "Thanh-Tung Le", + "Thanh Tung Le", "Khai Nguyen", "shanlin sun", "Kun Han", @@ -26825,7 +26825,7 @@ "title": "Learning Polynomial Problems with $SL(2, \\mathbb{R})$-Equivariance", "authors": [ "Hannah Lawrence", - "Mitchell Harris" + "Mitchell Tong Harris" ], "abstract": "Optimizing and certifying the positivity of polynomials are fundamental primitives across mathematics and engineering applications, from dynamical systems to operations research. However, solving these problems in practice requires large semidefinite programs, with poor scaling in dimension and degree. In this work, we demonstrate for the first time that neural networks can effectively solve such problems in a data-driven fashion, achieving tenfold speedups while retaining high accuracy. Moreover, we observe that these polynomial learning problems are equivariant to the non-compact group $SL(2,\\mathbb{R})$, which consists of area-preserving linear transformations. We therefore adapt our learning pipelines to accommodate this structure, including data augmentation, a new $SL(2,\\mathbb{R})$-equivariant architecture, and an architecture equivariant with respect to its maximal compact subgroup, $SO(2, \\mathbb{R})$. 
Surprisingly, the most successful approaches in practice do not enforce equivariance to the entire group, which we prove arises from an unusual lack of architecture universality for $SL(2,\\mathbb{R})$ in particular. A consequence of this result, which is of independent interest, is that there exists an equivariant function for which there is no sequence of equivariant approximating polynomials. This is a rare example of a symmetric problem where data augmentation outperforms a fully equivariant architecture, and provides interesting lessons in both theory and practice for other problems with non-compact symmetries.", "type": "Poster", @@ -26896,7 +26896,7 @@ "title": "Language Model Beats Diffusion - Tokenizer is key to visual generation", "authors": [ "Lijun Yu", - "Jos\u00e9 Lezama", + "Jose Lezama", "Nitesh Bharadwaj Gundavarapu", "Luca Versari", "Kihyuk Sohn", @@ -26908,7 +26908,7 @@ "Boqing Gong", "Ming-Hsuan Yang", "Irfan Essa", - "David Ross", + "David A Ross", "Lu Jiang" ], "abstract": "While Large Language Models (LLMs) are the dominant models for generative tasks in language, they do not perform as well as diffusion models on image and video generation. To effectively use LLMs for visual generation, one crucial component is the visual tokenizer that maps pixel-space inputs to discrete tokens appropriate for LLM learning. In this paper, we introduce \\modelname{}, a video tokenizer designed to generate concise and expressive tokens for both videos and images using a common token vocabulary. Equipped with this new tokenizer, we show that LLMs outperform diffusion models on standard image and video generation benchmarks including ImageNet and Kinetics. In addition, we demonstrate that our tokenizer surpasses the previously top-performing video tokenizer on two more tasks: (1) video compression comparable to the next-generation video codec (VCC) according to human evaluations, and (2) learning effective representations for action recognition tasks.", @@ -26925,7 +26925,7 @@ "title": "Understanding Certified Training with Interval Bound Propagation", "authors": [ "Yuhao Mao", - "Mark N M\u00fcller", + "Mark Niklas Mueller", "Marc Fischer", "Martin Vechev" ], @@ -27048,7 +27048,7 @@ "authors": [ "Dingling Yao", "Danru Xu", - "S\u00e9bastien Lachapelle", + "Sebastien Lachapelle", "Sara Magliacane", "Perouz Taslakian", "Georg Martius", @@ -27066,10 +27066,10 @@ }, { "id": 18748, - "title": "Protein Multimer Structure Prediction via PPI-guided Prompt Learning", + "title": "Protein Multimer Structure Prediction via Prompt Learning", "authors": [ "Ziqi Gao", - "Xiangguo SUN", + "Xiangguo Sun", "Zijing Liu", "Yu Li", "Hong Cheng", @@ -27086,14 +27086,14 @@ }, { "id": 18108, - "title": "Greedy Sequential Execution: Solving Homogeneous and Heterogeneous Cooperative Tasks with a Unified Framework", + "title": "Solving Homogeneous and Heterogeneous Cooperative Tasks with Greedy Sequential Execution", "authors": [ "Shanqi Liu", "Dong Xing", "Pengjie Gu", + "Xinrun Wang", "Bo An", - "Yong Liu", - "Xinrun Wang" + "Yong Liu" ], "abstract": "Effectively handling both homogeneous and heterogeneous tasks is crucial for the practical application of cooperative agents. However, existing solutions have not been successful in addressing both types of tasks simultaneously. On one hand, value-decomposition-based approaches demonstrate superior performance in homogeneous tasks. Nevertheless, they tend to produce agents with similar policies, which is unsuitable for heterogeneous tasks. 
On the other hand, solutions based on personalized observation or assigned roles are well-suited for heterogeneous tasks. However, they often lead to a trade-off situation where the agent's performance in homogeneous scenarios is negatively affected due to the aggregation of distinct policies. An alternative approach is to adopt sequential execution policies, which offer a flexible form for learning both types of tasks. However, learning sequential execution policies poses challenges in terms of credit assignment, and the lack of sufficient information about subsequently executed agents can lead to sub-optimal solutions. To tackle these issues, this paper proposes Greedy Sequential Execution (GSE) as a solution to learn the optimal policy that covers both scenarios. In the proposed GSE framework, we introduce an individual utility function into the framework of value decomposition to consider the complex interactions between agents. This function is capable of representing both the homogeneous and heterogeneous optimal policies. Furthermore, we utilize a greedy marginal contribution calculated by the utility function as the credit value of the sequential execution policy to address the credit assignment problem. We evaluated GSE in both homogeneous and heterogeneous scenarios. The results demonstrate that GSE achieves significant improvement in performance across multiple domains, especially in scenarios involving both homogeneous and heterogeneous tasks.", "type": "Spotlight Poster", @@ -27115,7 +27115,7 @@ "Bowen Song", "Qiang Lou", "Jian Jiao", - "Denis Charles" + "Denis X Charles" ], "abstract": "Large language models (LLMs) have made impressive progress in natural language processing. These models rely on proper human instructions (or prompts) to generate suitable responses. However, the potential of LLMs are not fully harnessed by commonly-used prompting methods: many human-in-the-loop algorithms employ ad-hoc procedures for prompt selection; while auto prompt generation approaches are essentially searching all possible prompts randomly and inefficiently. We propose Evoke, an automatic prompt refinement framework. In Evoke, there are two instances of a same LLM: one as a reviewer (LLM-Reviewer), it scores the current prompt; the other as an author (LLM-Author), it edits the prompt by considering the edit history and the reviewer's feedback. Such an author-reviewer feedback loop ensures that the prompt is refined in each iteration. We further aggregate a data selection approach to Evoke, where only the hard samples are exposed to the LLM. The hard samples are more important because the LLM can develop deeper understanding of the tasks out of them, while the model may already know how to solve the easier cases. Experimental results show that Evoke significantly outperforms existing methods. For instance, in the challenging task of logical fallacy detection, Evoke scores above 80, while all other baseline methods struggle to reach 20.", "type": "Poster", @@ -27133,8 +27133,8 @@ "Runtian Zhai", "Rattana Pukdee", "Roger Jin", - "Nina Balcan", - "Pradeep K Ravikumar" + "Maria Florina Balcan", + "Pradeep Kumar Ravikumar" ], "abstract": "Unlabeled data is a key component of modern machine learning. In general, the roleof unlabeled data is to impose a form of smoothness, usually from the similarityinformation encoded in a base kernel, such as the \u03f5-neighbor kernel or the adjacencymatrix of a graph. 
This work revisits the classical idea of spectrally transformedkernel regression (STKR), and provides a new class of general and scalable STKRestimators able to leverage unlabeled data. Intuitively, via spectral transformation,STKR exploits the data distribution for which unlabeled data can provide additionalinformation. First, we show that STKR is a principled and general approach,by characterizing a universal type of \u201ctarget smoothness\u201d, and proving that anysufficiently smooth function can be learned by STKR. Second, we provide scalableSTKR implementations for the inductive setting and a general transformationfunction, while prior work is mostly limited to the transductive setting. Third, wederive statistical guarantees for two scenarios: STKR with a known polynomialtransformation, and STKR with kernel PCA when the transformation is unknown.Overall, we believe that this work helps deepen our understanding of how to workwith unlabeled data, and its generality makes it easier to inspire new methods.", "type": "Spotlight Poster", @@ -27154,7 +27154,7 @@ "Josh Merel", "Alexander Winkler", "Jing Huang", - "Kris Kitani", + "Kris M. Kitani", "Weipeng Xu" ], "abstract": "We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control. Due to the high-dimensionality of humanoid control as well as the inherent difficulties in reinforcement learning, prior methods have focused on learning skill embeddings for a narrow range of movement styles (e.g. locomotion, game characters) from specialized motion datasets. This limited scope hampers its applicability in complex tasks. Our work closes this gap, significantly increasing the coverage of motion representation space. To achieve this, we first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset. We then create our motion representation by distilling skills directly from the imitator. This is achieved using an encoder-decoder structure with a variational information bottleneck. Additionally, we jointly learn a prior conditioned on proprioception (humanoid's own pose and velocities) to improve model expressiveness and sampling efficiency for downstream tasks. Sampling from the prior, we can generate long, stable, and diverse human motions. Using this latent space for hierarchical RL, we show that our policies solve tasks using natural and realistic human behavior. We demonstrate the effectiveness of our motion representation by solving generative tasks and motion tracking using VR controllers.", @@ -27170,7 +27170,7 @@ "id": 18107, "title": "PhyloGFN: Phylogenetic inference with generative flow networks", "authors": [ - "MING YANG ZHOU", + "Ming Yang Zhou", "Zichao Yan", "Elliot Layne", "Nikolay Malkin", @@ -27194,7 +27194,7 @@ "id": 18106, "title": "PolyVoice: Language Models for Speech to Speech Translation", "authors": [ - "Qianqian Dong", + "Qian qian Dong", "Zhiying Huang", "Qiao Tian", "Chen Xu", @@ -27285,8 +27285,8 @@ "id": 18101, "title": "Contrastive Learning is Spectral Clustering on Similarity Graph", "authors": [ - "Yifan Zhang", "Zhiquan Tan", + "Yifan Zhang", "Jingqin Yang", "Yang Yuan" ], @@ -27375,7 +27375,7 @@ "Jiuyong Li", "Lin Liu", "Jixue Liu", - "Thuc Le" + "Thuc Duy Le" ], "abstract": "This paper studies the challenging problem of estimating causal effects from observational data, in the presence of unobserved confounders. 
The two-stage least square (TSLS) method and its variants with a standard instrumental variable (IV) are commonly used to eliminate confounding bias, including the bias caused by unobserved confounders, but they rely on the linearity assumption. Besides, the strict condition of unconfounded instruments posed on a standard IV is too strong to be practical. To address these challenging and practical problems of the standard IV method (linearity assumption and the strict condition), in this paper, we use a conditional IV (CIV) to relax the unconfounded instrument condition of standard IV and propose a non-linear \\underline{CIV} regression with \\underline{C}onfounding \\underline{B}alancing \\underline{R}epresentation \\underline{L}earning, CBRL.CIV, for jointly eliminating the confounding bias from unobserved confounders and balancing the observed confounders, without the linearity assumption. We theoretically demonstrate the soundness of CBRL.CIV. Extensive experiments on synthetic and two real-world datasets show the competitive performance of CBRL.CIV against state-of-the-art IV-based estimators and superiority in dealing with the non-linear situation.", "type": "Poster", @@ -27410,7 +27410,7 @@ "authors": [ "Siqi Zhang", "Sayantan Choudhury", - "Sebastian Stich", + "Sebastian U Stich", "Nicolas Loizou" ], "abstract": "Distributed and federated learning algorithms and techniques associated primarily with minimization problems. However, with the increase of minimax optimization and variational inequality problems in machine learning, the necessity of designing efficient distributed/federated learning approaches for these problems is becoming more apparent. In this paper, we provide a unified convergence analysis of communication-efficient local training methods for distributed variational inequality problems (VIPs). Our approach is based on a general key assumption on the stochastic estimates that allows us to propose and analyze several novel local training algorithms under a single framework for solving a class of structured non-monotone VIPs. We present the first local gradient descent-accent algorithms with provable improved communication complexity for solving distributed variational inequalities on heterogeneous data. The general algorithmic framework recovers state-of-the-art algorithms and their sharp convergence guarantees when the setting is specialized to minimization or minimax optimization problems. Finally, we demonstrate the strong performance of the proposed algorithms compared to state-of-the-art methods when solving federated minimax optimization problems.", @@ -27426,9 +27426,9 @@ "id": 18097, "title": "Langevin Monte Carlo for strongly log-concave distributions: Randomized midpoint revisited", "authors": [ - "LU YU", + "Lu Yu", "Avetik Karagulyan", - "Arnak Dalalyan" + "Arnak S. Dalalyan" ], "abstract": "We revisit the problem of sampling from a target distribution that has a smooth strongly log-concave density everywhere in $\\mathbb{R}^p$. In this context, if no additional density information is available, the randomized midpoint discretization for the kinetic Langevin diffusion is known to be the most scalable method in high dimensions with large condition numbers. Our main result is a nonasymptotic and easy to compute upper bound on the $W_2$-error of this method. To provide a more thorough explanation of our method for establishing the computable upper bound, we conduct an analysis of the midpoint discretization for the vanilla Langevin process. 
This analysis helps to clarify the underlying principles and provides valuable insights that we use to establish an improved upper bound for the kinetic Langevin process with the midpoint discretization. Furthermore, by applying these techniques we establish new guarantees for the kinetic Langevin process with Euler discretization, which have a better dependence on the condition number than existing upper bounds", "type": "Poster", @@ -27444,7 +27444,7 @@ "title": "Fusing Models with Complementary Expertise", "authors": [ "Hongyi Wang", - "Felipe Polo", + "Felipe Maia Polo", "Yuekai Sun", "Souvik Kundu", "Eric Xing", @@ -27465,9 +27465,9 @@ "authors": [ "Rujie Wu", "Xiaojian Ma", - "Qing Li", "Zhenliang Zhang", "Wei Wang", + "Qing Li", "Song-Chun Zhu", "Yizhou Wang" ], @@ -27486,7 +27486,7 @@ "authors": [ "Kai Yi", "Nidham Gazagnadou", - "Peter Richtarik", + "Peter Richt\u00e1rik", "Lingjuan Lyu" ], "abstract": "The interest in federated learning has surged in recent research due to its unique ability to train a global model using privacy-secured information held locally on each client. This paper pays particular attention to the issue of client-side model heterogeneity, a pervasive challenge in the practical implementation of FL that escalates its complexity. Assuming a scenario where each client possesses varied memory storage, processing capabilities and network bandwidth - a phenomenon referred to as system heterogeneity - there is a pressing need to customize a unique model for each client. In response to this, we present an effective and adaptable federated framework FedP3, representing Federated Personalized and Privacy-friendly network Pruning, tailored for model heterogeneity scenarios. Our proposed methodology can incorporate and adapt well-established techniques to its specific instances.", @@ -27655,7 +27655,7 @@ "title": "Information Bottleneck Analysis of Deep Neural Networks via Lossy Compression", "authors": [ "Ivan Butakov", - "Aleksandr Tolmachev", + "Alexander Tolmachev", "Sofia Malanchuk", "Anna Neopryatnaya", "Alexey Frolov", @@ -27710,7 +27710,7 @@ }, { "id": 18661, - "title": "Exploring the Relationship Between Model Architecture and In-Context Learning Ability", + "title": "Is attention required for ICL? Exploring the Relationship Between Model Architecture and In-Context Learning Ability", "authors": [ "Ivan Lee", "Nan Jiang", @@ -27783,10 +27783,10 @@ "title": "Feature emergence via margin maximization: case studies in algebraic tasks", "authors": [ "Depen Morwani", - "Benjamin Edelman", + "Benjamin L. Edelman", "Costin-Andrei Oncescu", "Rosie Zhao", - "Sham Kakade" + "Sham M. Kakade" ], "abstract": "Understanding the internal representations learned by neural networks is a cornerstone challenge in the science of machine learning. While there have been significant recent strides in some cases towards understanding *how* neural networks implement specific target functions, this paper explores a complementary question -- *why* do networks arrive at particular computational strategies? Our inquiry focuses on the algebraic learning tasks of modular addition, sparse parities, and finite group operations. Our primary theoretical findings analytically characterize the features learned by stylized neural networks for these algebraic tasks. Notably, our main technique demonstrates how the principle of margin maximization alone can be used to fully specify the features learned by the network. 
Specifically, we prove that the trained networks utilize Fourier features to perform modular addition and employ features corresponding to irreducible group-theoretic representations to perform compositions in general groups, aligning closely with the empirical observations of Nanda et al. (2023) and Chughtai et al. (2023). More generally, we hope our techniques can help to foster a deeper understanding of why neural networks adopt specific computational strategies.", "type": "Spotlight Poster", @@ -27860,11 +27860,11 @@ "title": "Poly-View Contrastive Learning", "authors": [ "Amitis Shidani", - "Dan Busbridge", "R Devon Hjelm", "Jason Ramapuram", + "Russell Webb", "Eeshan Gunesh Dhekane", - "Russell Webb" + "Dan Busbridge" ], "abstract": "Contrastive learning typically matches pairs of related views among a number of unrelated negatives. These two related be generated (e.g. by augmentations) or occur naturally. We investigate matching when there are more than two related views which we call poly-view tasks, and derive new representation learning objectives using information maximization and sufficient statistics. We show that with unlimited computation, one should maximize the number of related views, and with a fixed compute budget, it is beneficial to decrease the number of unique samples whilst increasing the number of views of those samples. In particular, poly-view contrastive models trained for 128 epochs with batch size 256 outperform SimCLR trained for 1024 epochs at batch size 4096 on ImageNet1k, challenging the belief that contrastive models require large batch sizes and many training epochs.", "type": "Poster", @@ -27899,7 +27899,7 @@ }, { "id": 18068, - "title": "Disentangling Time Series Representations via Contrastive based $l$-Variational Inference", + "title": "Disentangling Time Series Representations via Contrastive Independence-of-Support on $l$-Variational Inference", "authors": [ "Khalid Oublal", "Said Ladjal", @@ -27925,7 +27925,7 @@ "Simon Guist", "Le Chen", "Daniel Haeufle", - "Bernhard Schoelkopf", + "Bernhard Sch\u00f6lkopf", "Dieter B\u00fcchler" ], "abstract": "Policy gradient methods hold great potential for solving complex continuous control tasks. Still, their training efficiency can be improved by exploiting structure within the optimization problem. Recent work indicates that supervised learning can be accelerated by leveraging the fact that gradients lie in a low-dimensional and slowly-changing subspace. In this paper, we demonstrate the existence of such gradient subspaces for policy gradient algorithms despite the continuously changing data distribution inherent to reinforcement learning. Our findings reveal promising directions for more efficient reinforcement learning, e.g., through improving parameter-space exploration or enabling second-order optimization.", @@ -28054,7 +28054,7 @@ "title": "Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization", "authors": [ "Yibing Liu", - "Chris Xing TIAN", + "Chris XING TIAN", "Haoliang Li", "Lei Ma", "Shiqi Wang" @@ -28094,7 +28094,7 @@ "Emmanuel Abbe", "Samy Bengio", "Etai Littwin", - "Joshua Susskind" + "Joshua M. Susskind" ], "abstract": "We investigate the capability of Transformer large language models (LLMs) to generalize on unseen symbols when trained on tasks that rely on abstract symbols (e.g., variables in programming and mathematics). 
Such a 'variable-binding' capability has long been studied in the neuroscience literature as one of the most basic 'reasoning' capabilities. For (i) binary classification tasks, we prove that Transformers can generalize to unseen symbols but require astonishingly large training data. For (ii) tasks with labels dependent on input symbols, we show an ''inverse scaling law'': Transformers fail to generalize to unseen symbols as their embedding dimension increases. For both cases (i) and (ii), we propose a Transformer modification, adding two trainable parameters per head that can reduce the amount of data needed.", "type": "Poster", @@ -28114,9 +28114,9 @@ "Dong Gong", "Mingming Gong", "Biwei Huang", - "Anton Hengel", + "Anton van den Hengel", "Kun Zhang", - "Qinfeng Shi" + "Javen Qinfeng Shi" ], "abstract": "Causal representation learning aims to unveil latent high-level causal representations from observed low-level data. One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as \\textit{identifiability}. A recent breakthrough explores identifiability by leveraging the change of causal influences among latent causal variables across multiple environments \\citep{liu2022identifying}. However, this progress rests on the assumption that the causal relationships among latent causal variables adhere strictly to linear Gaussian models. In this paper, we extend the scope of latent causal models to involve nonlinear causal relationships, represented by polynomial models, and general noise distributions conforming to the exponential family. Additionally, we investigate the necessity of imposing changes on all causal parameters and present partial identifiability results when part of them remains unchanged. Further, we propose a novel empirical estimation method, grounded in our theoretical finding, that enables learning consistent latent causal representations. Our experimental results, obtained from both synthetic and real-world data, validate our theoretical contributions concerning identifiability and consistency.", "type": "Poster", @@ -28174,7 +28174,7 @@ "authors": [ "Philippe Chlenski", "Ethan Turok", - "Antonio Moretti", + "Antonio Khalil Moretti", "Itsik Pe'er" ], "abstract": "Hyperbolic geometry is gaining traction in machine learning due to its capacity to effectively capture hierarchical structures in real-world data. Hyperbolic spaces, where neighborhoods grow exponentially, offer substantial advantages and have consistently delivered state-of-the-art results across diverse applications. However, hyperbolic classifiers often grapple with computational challenges. Methods reliant on Riemannian optimization frequently exhibit sluggishness, stemming from the increased computational demands of operations on Riemannian manifolds. In response to these challenges, we present HyperDT, a novel extension of decision tree algorithms into hyperbolic space. Crucially, HyperDT eliminates the need for computationally intensive Riemannian optimization, numerically unstable exponential and logarithmic maps, or pairwise comparisons between points by leveraging inner products to adapt Euclidean decision tree algorithms to hyperbolic space. Our approach is conceptually straightforward and maintains constant-time decision complexity while mitigating the scalability issues inherent in high-dimensional Euclidean spaces. Building upon HyperDT, we introduce HyperRF, a hyperbolic random forest model. 
Extensive benchmarking across diverse datasets underscores the superior performance of these models, providing a swift, precise, accurate, and user-friendly toolkit for hyperbolic data analysis.", @@ -28188,7 +28188,7 @@ }, { "id": 18057, - "title": "Dissecting sample hardness: Fine-grained analysis of Hardness Characterization Methods", + "title": "Dissecting Sample Hardness: A Fine-Grained Analysis of Hardness Characterization Methods for Data-Centric AI", "authors": [ "Nabeel Seedat", "Fergus Imrie", @@ -28245,7 +28245,7 @@ "authors": [ "Derek Lim", "Haggai Maron", - "Marc T Law", + "Marc T. Law", "Jonathan Lorraine", "James Lucas" ], @@ -28287,7 +28287,7 @@ "authors": [ "Hao Chen", "Jindong Wang", - "Ankit Parag Shah", + "Ankit Shah", "Ran Tao", "Hongxin Wei", "Xing Xie", @@ -28305,7 +28305,7 @@ }, { "id": 18554, - "title": "Mol-Instructions - A Large-Scale Biomolecular Instruction Dataset for Large Language Models", + "title": "Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models", "authors": [ "Yin Fang", "Xiaozhuan Liang", @@ -28329,7 +28329,7 @@ "id": 18548, "title": "Sample-Efficient Linear Representation Learning from Non-IID Non-Isotropic Data", "authors": [ - "Thomas T. Zhang", + "Thomas TCK Zhang", "Leonardo Felipe Toso", "James Anderson", "Nikolai Matni" @@ -28507,8 +28507,8 @@ "Yufei Huang", "Siyuan Li", "Haitao Lin", - "Nitesh Chawla", - "Stan Z Li" + "Nitesh V Chawla", + "Stan Z. Li" ], "abstract": "Protein-Protein Interactions (PPIs) are fundamental in various biological processes and play a key role in life activities. The growing demand and cost of experimental PPI assays require computational methods for efficient PPI prediction. While existing methods rely heavily on protein sequence for PPI prediction, it is the protein structure that is the key to determine the interactions. To take both protein modalities into account, we define the microenvironment of an amino acid residue by its sequence and structural contexts, which describe the surrounding chemical properties and geometric features. In addition, microenvironments defined in previous work are largely based on experimentally assayed physicochemical properties, for which the \"vocabulary\" is usually extremely small. This makes it difficult to cover the diversity and complexity of microenvironments. In this paper, we propose Microenvironment-Aware Protein Embedding for PPI prediction (MPAE-PPI), which encodes microenvironments into chemically meaningful discrete codes via a sufficiently large microenvironment \"vocabulary\" (i.e., codebook). Moreover, we propose a novel pre-training strategy, namely Masked Codebook Modeling (MCM), to capture the dependencies between different microenvironments by randomly masking the codebook and reconstructing the input. With the learned microenvironment codebook, we can reuse it as an off-the-shelf tool to efficiently and effectively encode proteins of different sizes and functions for large-scale PPI prediction. Extensive experiments show that MAPE-PPI can scale to PPI prediction with millions of PPIs with superior trade-offs between effectiveness and computational efficiency than the state-of-the-art competitors.", "type": "Spotlight Poster", @@ -28571,7 +28571,7 @@ "Soroosh Mariooryad", "Ehud Rivlin", "RJ Skerry-Ryan", - "Michele Tadmor Ramanovich" + "Michelle Tadmor Ramanovich" ], "abstract": "We present a novel approach to adapting pre-trained large language models (LLMs) to perform question answering (QA) and speech continuation. 
By endowing the LLM with a pre-trained speech encoder, our model becomes able to take speech inputs and generate speech outputs. The entire system is trained end-to-end and operates directly on spectrograms, simplifying our architecture. Key to our approach is a training objective that jointly supervises speech recognition, text continuation, and speech synthesis using only paired speech-text pairs, enabling a `cross-modal' chain-of-thought within a single decoding pass. Our method surpasses existing spoken language models in speaker preservation and semantic coherence. Furthermore, the proposed model improves upon direct initialization in retaining the knowledge of the original LLM as demonstrated through spoken QA datasets.", "type": "Poster", @@ -28640,7 +28640,7 @@ "authors": [ "Ant\u00f3nio Farinhas", "Chrysoula Zerva", - "Dennis Ulmer", + "Dennis Thomas Ulmer", "Andre Martins" ], "abstract": "Split conformal prediction has recently sparked great interest due to its ability to provide formally guaranteed uncertainty sets or intervals for predictions made by black-box neural models, ensuring a predefined probability of containing the actual ground truth. While the original formulation assumes data exchangeability, some extensions handle non-exchangeable data, which is often the case in many real-world scenarios. In parallel, some progress has been made in conformal methods that provide statistical guarantees for a broader range of objectives, such as bounding the best $F_1$-score or minimizing the false negative rate in expectation. In this paper, we leverage and extend these two lines of work by proposing non-exchangeable conformal risk control, which allows controlling the expected value of any monotone loss function when the data is not exchangeable. Our framework is flexible, makes very few assumptions, and allows weighting the data based on its statistical similarity with the test examples; a careful choice of weights may result on tighter bounds, making our framework useful in the presence of change points, time series, or other forms of distribution drift. Experiments with both synthetic and real world data show the usefulness of our method.", @@ -28654,13 +28654,13 @@ }, { "id": 18038, - "title": "Feasibility-Guided Safe Offline Reinforcement Learning", + "title": "Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model", "authors": [ "Yinan Zheng", "Jianxiong Li", "Dongjie Yu", "Yujie Yang", - "Shengbo Li", + "Shengbo Eben Li", "Xianyuan Zhan", "Jingjing Liu" ], @@ -28746,7 +28746,7 @@ "title": "Understanding Catastrophic Forgetting in Language Models via Implicit Inference", "authors": [ "Suhas Kotha", - "Jacob Springer", + "Jacob Mitchell Springer", "Aditi Raghunathan" ], "abstract": "We lack a systematic understanding of the effects of fine-tuning (via methods such as instruction-tuning or reinforcement learning from human feedback), particularly on tasks outside the narrow fine-tuning distribution. In a simplified scenario, we demonstrate that improving performance on tasks within the fine-tuning data distribution comes at the expense of capabilities on other tasks. We hypothesize that language models implicitly infer the task of the prompt and that fine-tuning skews this inference towards tasks in the fine-tuning distribution. 
To test this, we propose Conjugate Prompting, which artificially makes the task look farther from the fine-tuning distribution while requiring the same capability, and we find that this recovers some of the pretraining capabilities on our synthetic setup. Since real-world fine-tuning distributions are predominantly English, we apply conjugate prompting to recover pretrained capabilities in LLMs by simply translating the prompts to different languages. This allows us to recover the in-context learning abilities lost via instruction tuning, and more concerningly, recover harmful content generation suppressed by safety fine-tuning in chatbots like ChatGPT.", @@ -28813,7 +28813,7 @@ }, { "id": 18033, - "title": "Lightweight Language Model Calibration for Open-ended Question Answering with Varied Answer Lengths", + "title": "LitCab: Lightweight Language Model Calibration over Short- and Long-form Responses", "authors": [ "Xin Liu", "Muhammad Khalifa", @@ -28837,7 +28837,7 @@ "Yanhao Wu", "Wei Ke", "Mathieu Salzmann", - "Sabine Susstrunk" + "Sabine S\u00fcsstrunk" ], "abstract": "Dense Self-Supervised Learning (SSL) creates positive pairs by establishing correspondences between regions or points, thereby aiming to preserve local features, for example of individual objects.However, existing approaches tend to couple objects by leaking information from the neighboring contextual regions when the pairs have a limited overlap. In this paper, we first quantitatively identify and confirm the existence of such a coupling phenomenon. We then address it by developing a remarkably simple yet highly effective solution comprising a novel augmentation method, Region Collaborative Cutout (RCC), and a corresponding decoupling branch. Importantly, our design is versatile and can be seamlessly integrated into existing SSL frameworks, whether based on Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs). We conduct extensive experiments, incorporating our solution into two CNN-based and two ViT-based methods, with results confirming the effectiveness of our approach. Moreover, we provide empirical evidence that our method significantly contributes to the disentanglement of feature representations among objects, both in quantitative and qualitative terms.", "type": "Poster", @@ -28905,11 +28905,11 @@ }, { "id": 18030, - "title": "TASK PLANNING FOR VISUAL ROOM REARRANGEMENT UNDER PARTIAL OBSERVABILITY", + "title": "Task Planning for Visual Room Rearrangement under Partial Observability", "authors": [ - "DIPANJAN DAS", "Karan Mirakhor", "Sourav Ghosh", + "Dipanjan Das", "Brojeshwar Bhowmick" ], "abstract": "This paper presents a novel hierarchical task planner under partial observabilitythat empowers an embodied agent to use visual input to efficiently plan a sequenceof actions for simultaneous object search and rearrangement in an untidy room,to achieve a desired tidy state. The paper introduces (i) a novel Search Networkthat utilizes commonsense knowledge from large language models to find unseenobjects, (ii) a Deep RL network trained with proxy reward, along with (iii) a novelgraph-based state representation to produce a scalable and effective planner thatinterleaves object search and rearrangement to minimize the number of steps takenand overall traversal of the agent, as well as to resolve blocked goal and swapcases, and (iv) a sample-efficient cluster-biased sampling for simultaneous trainingof the proxy reward network along with the Deep RL network. 
Furthermore,the paper presents new metrics and a benchmark dataset - RoPOR, to measurethe effectiveness of rearrangement planning. Experimental results show that ourmethod significantly outperforms the state-of-the-art rearrangement methods Weihset al. (2021a); Gadre et al. (2022); Sarch et al. (2022); Ghosh et al. (2022).", @@ -29018,7 +29018,7 @@ }, { "id": 18437, - "title": "Balancing Act: Sparse Models with Constrained Disparate Impact", + "title": "Balancing Act: Constraining Disparate Impact in Sparse Models", "authors": [ "Meraj Hashemizadeh", "Juan Ramirez", @@ -29038,7 +29038,7 @@ }, { "id": 18024, - "title": "Analyzing and Improving OT-based Adversarial Networks", + "title": "Analyzing and Improving Optimal-Transport-based Adversarial Networks", "authors": [ "Jaemoo Choi", "Jaewoong Choi", @@ -29134,10 +29134,10 @@ "id": 19752, "title": "Is ImageNet worth 1 video? Learning strong image encoders from 1 long unlabelled video", "authors": [ - "Shashank Venkataramanan", + "Shashanka Venkataramanan", "Mamshad Nayeem Rizve", "Joao Carreira", - "Yuki Asano", + "Yuki M Asano", "Yannis Avrithis" ], "abstract": "Self-supervised learning has unlocked the potential of scaling up pretraining to billions of images, since annotation is unnecessary. But are we making the best use of data? How more economical can we be? In this work, we attempt to answer this question by making two contributions. First, we investigate first-person videos and introduce a ``Walking Tours'' dataset. These videos are high-resolution, hours-long, captured in a single uninterrupted take, depicting a large number of objects and actions with natural scene transitions. They are unlabeled and uncurated, thus realistic for self-supervision and comparable with human learning. Second, we introduce a novel self-supervised image pretraining method tailored for learning from continuous videos. Existing methods typically adapt image-based pretraining approaches to incorporate more frames. Instead, we advocate a ``tracking to learn to recognize'' approach. Our method called DoRA, leads to attention maps that **D**isc**O**ver and t**RA**ck objects over time in an end-to-end manner, using transformer cross-attention. We derive multiple views from the tracks and use them in a classical self-supervised distillation loss. 
Using our novel approach, a single Walking Tours video remarkably becomes a strong competitor to ImageNet for several image and video downstream tasks.", @@ -29153,7 +29153,7 @@ "id": 18022, "title": "Pushing Boundaries: Mixup's Influence on Neural Collapse", "authors": [ - "Quinn Fisher", + "Quinn LeBlanc Fisher", "Haoming Meng", "Vardan Papyan" ], @@ -29190,7 +29190,7 @@ "Bowen Shi", "XIAOPENG ZHANG", "Yaoming Wang", - "Li Jin", + "Jin Li", "Wenrui Dai", "Junni Zou", "Hongkai Xiong", @@ -29207,7 +29207,7 @@ }, { "id": 18408, - "title": "Improving Language Models with Advantage-based Offline Policy Gradients", + "title": "Leftover-Lunch: Advantage-based Offline Reinforcement Learning for Language Models", "authors": [ "Ashutosh Baheti", "Ximing Lu", @@ -29287,7 +29287,7 @@ "title": "Dynamics-Informed Protein Design with Structure Conditioning", "authors": [ "Urszula Julia Komorowska", - "Simon Mathis", + "Simon V Mathis", "Kieran Didi", "Francisco Vargas", "Pietro Lio", @@ -29361,7 +29361,7 @@ "id": 18015, "title": "MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning", "authors": [ - "Zayne Sprague", + "Zayne Rea Sprague", "Xi Ye", "Kaj Bostrom", "Swarat Chaudhuri", @@ -29404,7 +29404,7 @@ "Xinting Huang", "Deng Cai", "Xiaojun Quan", - "Wei BI", + "Wei Bi", "Shuming Shi" ], "abstract": "While training large language models (LLMs) from scratch can generate models with distinct functionalities and strengths, it comes at significant costs and may result in redundant capabilities. Alternatively, a cost-effective and compelling approach is to merge existing pre-trained LLMs into a more potent model. However, due to the varying architectures of these LLMs, directly blending their weights is impractical. In this paper, we introduce the notion of knowledge fusion for LLMs, aimed at combining the capabilities of existing LLMs and transferring them into a single LLM. By leveraging the generative distributions of source LLMs, we externalize their collective knowledge and unique strengths, thereby potentially elevating the capabilities of the target model beyond those of any individual source LLM. We validate our approach using three popular LLMs with different architectures\u2014Llama-2, MPT, and OpenLLaMA\u2014across various benchmarks and tasks. Our findings confirm that the fusion of LLMs can improve the performance of the target model across a range of capabilities such as reasoning, commonsense, and code generation. Our code, model weights, and data are public at \\url{https://github.com/fanqiwan/FuseLLM}.", @@ -29545,7 +29545,7 @@ }, { "id": 18003, - "title": "What Makes a Good Prune? Optimal Unstructured Pruning for Maximal Cosine Similarity", + "title": "What Makes a Good Prune? Maximal Unstructured Pruning for Maximal Cosine Similarity", "authors": [ "Gabryel Mason-Williams", "Fredrik Dahlqvist" @@ -29611,7 +29611,7 @@ }, { "id": 18002, - "title": "ETGraph: A Pioneering Dataset Bridging Ethereum and Twitter", + "title": "EX-Graph: A Pioneering Dataset Bridging Ethereum and X", "authors": [ "Qian Wang", "Zhen Zhang", @@ -29695,7 +29695,7 @@ "title": "Statistical Perspective of Top-K Sparse Softmax Gating Mixture of Experts", "authors": [ "Huy Nguyen", - "Pedram Akbarian Saravi", + "Pedram Akbarian", "Fanqi Yan", "Nhat Ho" ], @@ -29735,9 +29735,9 @@ "Hee Suk Yoon", "Eunseop Yoon", "Joshua Tian Jin Tee", - "Mark Hasegawa-Johnson", + "Mark A. Hasegawa-Johnson", "Yingzhen Li", - "Chang Yoo" + "Chang D. 
Yoo" ], "abstract": "In deep learning, test-time adaptation has gained attention as a method for model fine-tuning without the need for labeled data. A prime exemplification is the recently proposed test-time prompt tuning for large-scale vision-language models such as CLIP. Unfortunately, these prompts have been mainly developed to improve accuracy, overlooking the importance of calibration\u2014a crucial aspect for quantifying prediction uncertainty. However, traditional calibration methods rely on substantial amounts of labeled data, making them impractical for test-time scenarios. To this end, this paper explores calibration during test-time prompt tuning by leveraging the inherent properties of CLIP. Through a series of observations, we find that the prompt choice significantly affects the calibration in CLIP, where the prompts leading to higher text feature dispersion result in better-calibrated predictions. Introducing the Average Text Feature Dispersion (ATFD), we establish its relationship with calibration error and present a novel method, Calibrated Test-time Prompt Tuning (C-TPT), for optimizing prompts during test-time with enhanced calibration. Through extensive experiments on different CLIP architectures and datasets, we show that C-TPT can effectively improve the calibration of test-time prompt tuning without needing labeled data. The code will be publicly available.", "type": "Poster", @@ -29775,7 +29775,7 @@ "authors": [ "Yuchen Hu", "CHEN CHEN", - "Huck Yang", + "Chao-Han Huck Yang", "Ruizhe Li", "Chao Zhang", "Pin-Yu Chen", @@ -29794,7 +29794,7 @@ "id": 18284, "title": "Learning the greatest common divisor: explaining transformer predictions", "authors": [ - "Fran\u00e7ois Charton" + "Francois Charton" ], "abstract": "We train small transformers to calculate the greatest common divisor (GCD) of two positive integers, and show that their predictions are fully explainable. During training, models learn a list $\\mathcal D$ of divisors, and predict the largest element of $\\mathcal D$ that divides both inputs. We also show that training distributions have a large impact on performance. Models trained from uniform operands only learn a handful of GCD (up to $38$ out of $100$). Training from log-uniform operands boosts performance to $73$ correct GCD, and balancing the distribution of GCD, from inverse square to log-uniform, to $91$. On the other hand, a uniform distribution of GCD in the training set breaks model explainability.", "type": "Spotlight Poster", @@ -29812,8 +29812,8 @@ "Haiyan Jiang", "Vincent Zoonekynd", "Giulia De Masi", - "Huan Xiong", - "Bin Gu" + "Bin Gu", + "Huan Xiong" ], "abstract": "Spiking Neural Networks (SNNs) are attracting growing interest for their energy-efficient computing when implemented on neuromorphic hardware. However, directly training SNNs, even adopting batch normalization (BN), is highly challenging due to their non-differentiable activation function and the temporally delayed accumulation of outputs over time. For SNN training, this temporal accumulation gives rise to Temporal Covariate Shifts (TCS) along the temporal dimension, a phenomenon that would become increasingly pronounced with layerwise computations across multiple layers and time-steps. 
In this paper, we introduce TAB (Temporal Accumulated Batch Normalization), a novel SNN batch normalization method that addresses the temporal covariate shift issue by aligning with neuron dynamics (specifically the accumulated membrane potential) and utilizing temporal accumulated statistics for data normalization. Within its framework, TAB effectively encapsulates the historical temporal dependencies that underlie the membrane potential accumulation process, thereby establishing a natural connection between neuron dynamics and TAB batch normalization. Experimental results on CIFAR-10, CIFAR-100, and DVS-CIFAR10 show that our TAB method outperforms other state-of-the-art methods.", "type": "Poster", @@ -29828,7 +29828,7 @@ "id": 17994, "title": "Diagnosing Transformers: Illuminating Feature Spaces for Clinical Decision-Making", "authors": [ - "Aliyah Hsu", + "Aliyah R. Hsu", "Yeshwanth Cherapanamjeri", "Briton Park", "Tristan Naumann", @@ -29854,7 +29854,7 @@ "Aojun Zhou", "Pan Lu", "Hongsheng Li", - "Gao Peng", + "Peng Gao", "Yu Qiao" ], "abstract": "With the rising tide of large language models (LLMs), there has been a growing interest in developing general-purpose instruction-following models, e.g., ChatGPT. To this end, we present LLaMA-Adapter, a lightweight adaption method for efficient instruction tuning of LLaMA. Using 52K self-instruct demonstrations, LLaMA-Adapter only introduces 1.2M learnable parameters upon the frozen LLaMA 7B model, and costs less than one hour for fine-tuning. Specifically, a zero-initialized attention mechanism is proposed. It adopts a learnable zero gating to adaptively inject the instructional cues into LLaMA within self-attention layers, contributing to a stable training process and superior final performance. In this way, LLaMA-Adapter can generate high-quality responses to diverse language instructions, comparable to Alpaca with fully fine-tuned 7B parameters. Besides language commands, by incorporating an image encoder, our approach can be simply extended to a multi-modal LLM for image-conditioned instruction following, which achieves superior multi-modal reasoning capacity on several popular benchmarks (MME, MMBench, LVLM-eHub). Furthermore, we also verify the proposed zero-initialized attention mechanism for fine-tuning other pre-trained models (ViT, RoBERTa, CLIP) on traditional vision and language tasks, demonstrating the effectiveness and generalizability of our approach.", @@ -29909,9 +29909,9 @@ "Faeze Brahman", "Chandra Bhagavatula", "Valentina Pyatkin", - "Jena Hwang", + "Jena D. Hwang", "Xiang Lorraine Li", - "Hirona Arai", + "Hirona Jacqueline Arai", "Soumya Sanyal", "Keisuke Sakaguchi", "Xiang Ren", @@ -29975,7 +29975,7 @@ "authors": [ "Xiaxia Wang", "David Jaime Tena Cucala", - "Bernardo Grau", + "Bernardo Cuenca Grau", "Ian Horrocks" ], "abstract": "There is increasing interest in methods for extracting interpretable rules from ML models trained to solve a wide range of tasks over knowledge graphs (KGs), such as KG completion, node classification, question answering and recommendation. Many such approaches, however, lack formal guarantees establishing the precise relationship between the model and the extracted rules, and this lack of assurance becomes especially problematic when the extracted rules are applied in safety-critical contexts or to ensure compliance with legal requirements. 
Recent research has examined whether the rules derived from the influential Neural-LP model exhibit soundness (or completeness), which means that the results obtained by applying the model to any dataset always contain (or are contained in) the results obtained by applying the rules to the same dataset. In this paper, we extend this analysis to the context of DRUM, an approach that has demonstrated superior practical performance. After observing that the rules currently extracted from a DRUM model can be unsound and/or incomplete, we propose a novel algorithm where the output rules, expressed in an extension of Datalog, ensure both soundness and completeness. This algorithm, however, can be inefficient in practice and hence we propose additional constraints to DRUM models facilitating rule extraction, albeit at the expense of reduced expressive power.", @@ -29991,10 +29991,10 @@ "id": 18256, "title": "Human Motion Diffusion as a Generative Prior", "authors": [ - "Yonatan Shafir", + "Yoni Shafir", "Guy Tevet", "Roy Kapon", - "Amit Bermano" + "Amit Haim Bermano" ], "abstract": "Recent work has demonstrated the significant potential of denoising diffusion modelsfor generating human motion, including text-to-motion capabilities.However, these methods are restricted by the paucity of annotated motion data,a focus on single-person motions, and a lack of detailed control.In this paper, we introduce three forms of composition based on diffusion priors:sequential, parallel, and model composition.Using sequential composition, we tackle the challenge of long sequencegeneration. We introduce DoubleTake, an inference-time method with whichwe generate long animations consisting of sequences of prompted intervalsand their transitions, using a prior trained only for short clips.Using parallel composition, we show promising steps toward two-person generation.Beginning with two fixed priors as well as a few two-person training examples, we learn a slimcommunication block, ComMDM, to coordinate interaction between the two resulting motions.Lastly, using model composition, we first train individual priorsto complete motions that realize a prescribed motion for a given joint.We then introduce DiffusionBlending, an interpolation mechanism to effectively blend severalsuch models to enable flexible and efficient fine-grained joint and trajectory-level control and editing.We evaluate the composition methods using an off-the-shelf motion diffusion model,and further compare the results to dedicated models trained for these specific tasks.", "type": "Poster", @@ -30103,7 +30103,7 @@ "title": "ASMR: Activation-Sharing Multi-Resolution Coordinate Networks for Efficient Inference", "authors": [ "Jason Chun Lok Li", - "Steven Luo", + "Steven Tin Sui Luo", "Le Xu", "Ngai Wong" ], @@ -30248,8 +30248,8 @@ "id": 17969, "title": "Accelerated Sampling with Stacked Restricted Boltzmann Machines", "authors": [ - "Cl\u00e9ment Roussel", "Jorge Fernandez-de-Cossio-Diaz", + "Cl\u00e9ment Roussel", "Simona Cocco", "Remi Monasson" ], @@ -30293,7 +30293,7 @@ "Laurent Dinh", "Hanlin Goh", "Preetum Nakkiran", - "Joshua Susskind", + "Joshua M. Susskind", "Etai Littwin" ], "abstract": "Joint embedding (JE) architectures have emerged as a promising avenue for ac-quiring transferable data representations. A key obstacle to using JE methods,however, is the inherent challenge of evaluating learned representations withoutaccess to a downstream task, and an annotated dataset. 
Without efficient and re-liable evaluation, it is difficult to iterate on architectural and training choices forJE methods. In this paper, we introduce LiDAR (Linear Discriminant AnalysisRank), a metric designed to measure the quality of representations within JE archi-tectures. Our metric addresses several shortcomings of recent approaches basedon feature covariance rank by discriminating between informative and uninforma-tive features. In essence, LiDAR quantifies the rank of the Linear DiscriminantAnalysis (LDA) matrix associated with the surrogate SSL task\u2014a measure thatintuitively captures the information content as it pertains to solving the SSL task.We empirically demonstrate that LiDAR significantly surpasses naive rank basedapproaches in its predictive power of optimal hyperparameters. Our proposed cri-terion presents a more robust and intuitive means of assessing the quality of rep-resentations within JE architectures, which we hope facilitates broader adoptionof these powerful techniques in various domains.", @@ -30309,7 +30309,7 @@ "id": 18197, "title": "Project and Probe: Sample-Efficient Adaptation by Interpolating Orthogonal Features", "authors": [ - "Annie Chen", + "Annie S Chen", "Yoonho Lee", "Amrith Setlur", "Sergey Levine", @@ -30330,7 +30330,7 @@ "authors": [ "Zhilin Huang", "Ling Yang", - "Nick Zhou", + "Xiangxin Zhou", "Zhilong Zhang", "Wentao Zhang", "Xiawu Zheng", @@ -30424,7 +30424,7 @@ "title": "Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning", "authors": [ "Kostadin Garov", - "Dimitar I. Dimitrov", + "Dimitar Iliev Dimitrov", "Nikola Jovanovi\u0107", "Martin Vechev" ], @@ -30446,7 +30446,7 @@ "Ronen Katsir", "Noam Wies", "Ayana Shenhav", - "Yael Ben-Oren", + "Yael Sapir Ben-Oren", "David Zar", "Oren Tadmor", "Jacob Bitterman", @@ -30499,7 +30499,7 @@ }, { "id": 18177, - "title": "The Alignment Problem from a Deep Learning Perspective: A Position Paper", + "title": "The Alignment Problem from a Deep Learning Perspective", "authors": [ "Richard Ngo", "Lawrence Chan", @@ -30561,7 +30561,7 @@ "title": "Towards image compression with perfect realism at ultra-low bitrates", "authors": [ "Marlene Careil", - "Matthew J Muckley", + "Matthew J. Muckley", "Jakob Verbeek", "St\u00e9phane Lathuili\u00e8re" ], @@ -30597,7 +30597,7 @@ "authors": [ "Xihaier Luo", "Wei Xu", - "Balasubramanya T. Nadiga", + "Balu Nadiga", "Yihui Ren", "Shinjae Yoo" ], @@ -30617,10 +30617,10 @@ "Federico Bianchi", "Mirac Suzgun", "Giuseppe Attanasio", - "Paul R\u00f6ttger", + "Paul Rottger", "Dan Jurafsky", "Tatsunori Hashimoto", - "James Y Zou" + "James Zou" ], "abstract": "Training large language models to follow instructions makes them perform better on a wide range of tasks and generally become more helpful. However, a perfectly helpful model will follow even the most malicious instructions and readily generate harmful content.In this paper, we raise concerns over the safety of models that only emphasize helpfulness, not harmlessness, in their instruction-tuning.We show that several popular instruction-tuned models are highly unsafe. Moreover, we show that adding just 3\\% safety examples (a few hundred demonstrations) when fine-tuning a model like LLaMA can substantially improve its safety. Our safety-tuning does not make models significantly less capable or helpful as measured by standard benchmarks. 
However, we do find exaggerated safety behaviours, where too much safety-tuning makes models refuse perfectly safe prompts if they superficially resemble unsafe ones. As a whole, our results illustrate trade-offs in training LLMs to be helpful and training them to be safe.", "type": "Poster", @@ -30639,7 +30639,7 @@ "Shiwei Li", "Yuanxun Lu", "Tian Fang", - "David McKinnon", + "David Neil McKinnon", "Yanghai Tsin", "Long Quan", "Yao Yao" @@ -30657,10 +30657,10 @@ "id": 17956, "title": "Estimating Shape Distances on Neural Representations with Limited Samples", "authors": [ - "Dean Pospisil", - "Brett Larsen", - "Sarah Harvey", - "Alex Williams" + "Dean A Pospisil", + "Brett W. Larsen", + "Sarah E Harvey", + "Alex H Williams" ], "abstract": "Measuring geometric similarity between high-dimensional network representations is a topic of longstanding interest to neuroscience and deep learning. Although many methods have been proposed, only a few works have rigorously analyzed their statistical efficiency or quantified estimator uncertainty in data-limited regimes. Here, we derive upper and lower bounds on the worst-case convergenceof standard estimators of shape distance\u2014a measure of representational dissimilarity proposed by Williams et al. (2021). These bounds reveal the challenging nature of the problem in high-dimensional feature spaces. To overcome these challenges, we introduce a novel method-of-moments estimator with a tunable bias-variance tradeoff parameterized by an upper bound on bias. We show that this estimator achieves superior performance to standard estimators in simulation and on neural data, particularly in high-dimensional settings. Our theoretical work and estimator thus respectively define and dramatically expand the scope of neural data for which geometric similarity can be accurately measured.", "type": "Poster", @@ -30735,9 +30735,9 @@ "authors": [ "Shunyu Yao", "Howard Chen", - "Austin Hanjie", + "Austin W. Hanjie", "Runzhe Yang", - "Karthik Narasimhan" + "Karthik R Narasimhan" ], "abstract": "Text generation under constraints have seen increasing interests in natural language processing, especially with the rapidly improving capabilities of large language models. However, existing benchmarks for constrained generation usually focus on fixed constraint types (e.g. generate a sentence containing certain words) that have proved to be easy for state-of-the-art models like GPT-4. We present COLLIE, a grammar-based framework that allows the specification of rich, compositional constraints with diverse generation levels (word, sentence, paragraph, passage) and modeling challenges (e.g. language understanding, logical reasoning, counting, semantic planning). We also develop tools for automatic extraction of task instances given a constraint structure and a raw text corpus. Using COLLIE, we compile the COLLIE-v1 dataset with 1,132 instances comprising 13 constraint structures. We perform systematic experiments across five state-of-the-art instruction-tuned language models and analyze their performances to reveal shortcomings. COLLIE is designed to be extensible and lightweight, and we hope the community finds it useful to develop more complex constraints and evaluations in the future.", "type": "Poster", @@ -30756,7 +30756,7 @@ "Julian Cremer", "Frank Noe", "Djork-Arn\u00e9 Clevert", - "Kristof T. 
Sch\u00fctt" + "Kristof T Sch\u00fctt" ], "abstract": "Deep generative diffusion models are a promising avenue for 3D $\\textit{de novo}$ molecular design in material science and drug discovery. However, their utility is still constrained by suboptimal performance with large molecular structures and limited training data. Addressing this gap, we explore the design space of E(3) equivariant diffusion models, focusing on previously blank spots. Our extensive comparative analysis evaluates the interplay between continuous and discrete state spaces. Out of this investigation, we introduce the EQGAT-diff model, which consistently surpasses the performance of established models on the QM9 and GEOM-Drugs datasets by a large margin.Distinctively, EQGAT-diff takes continuous atomic positions while chemical elements and bond types are categorical and employ a time-dependent loss weighting that significantly increases training convergence and the quality of generated samples.To further strengthen the applicability of diffusion models to limited training data, we examine the transferability of EQGAT-diff trained on the large PubChem3D dataset with implicit hydrogens to target distributions with explicit hydrogens. Fine-tuning EQGAT-diff for a couple of iterations further pushes state-of-the-art performance across datasets.We envision that our findings will find applications in structure-based drug design, where the accuracy of generative models for small datasets of complex molecules is critical.", "type": "Poster", @@ -30790,8 +30790,8 @@ "authors": [ "Guanhua Wang", "Heyang Qin", - "Sam Jacobs", - "Xiaoxia (Shirley) Wu", + "Sam Ade Jacobs", + "Xiaoxia Wu", "Connor Holmes", "Zhewei Yao", "Samyam Rajbhandari", @@ -30828,7 +30828,7 @@ }, { "id": 17945, - "title": "Unlock Predictable Scaling from Emergent Abilities", + "title": "Predicting Emergent Abilities with Infinite Resolution Evaluation", "authors": [ "Shengding Hu", "Xin Liu", @@ -30854,7 +30854,7 @@ }, { "id": 17946, - "title": "Multi-Scale Representations by Varing Window Attention for Semantic Segmentation", + "title": "Multi-Scale Representations by Varying Window Attention for Semantic Segmentation", "authors": [ "Haotian Yan", "Ming Wu", @@ -30880,7 +30880,7 @@ "Alexey Naumov", "Pierre Perrault", "Michal Valko", - "Pierre M\u00e9nard" + "Pierre Menard" ], "abstract": "Incorporating expert demonstrations has empirically helped to improve the sample efficiency of reinforcement learning (RL). This paper quantifies theoretically to what extent this extra information reduces RL's sample complexity. Precisely, we study the demonstration-regularized reinforcement learning framework that leverages the expert demonstrations by $\\mathrm{KL}$-regularization for a policy learned by behavior cloning. Our findings reveal that utilizing $N^{\\mathrm{E}}$ expert demonstrations enables the identification of an optimal policy at a sample complexity of order $\\widetilde{\\mathcal{O}}(\\mathrm{Poly}(S,A,H)/(\\varepsilon^2 N^{\\mathrm{E}}))$ in finite and $\\widetilde{\\mathcal{O}}(\\mathrm{Poly}(d,H)/(\\varepsilon^2 N^{\\mathrm{E}}))$ in linear Markov decision processes, where $\\varepsilon$is the target precision, $H$ the horizon, $A$ the number of action, $S$ the number of states in the finite case and $d$ the dimension of the feature space in the linear case. As a by-product, we provide tight convergence guarantees for the behaviour cloning procedure under general assumptions on the policy classes. 
Additionally, we establish that demonstration-regularized methods are provably efficient for reinforcement learning from human feedback (RLHF). In this respect, we provide theoretical evidence showing the benefits of KL-regularization for RLHF in tabular and linear MDPs. Interestingly, we avoid pessimism injection by employing computationally feasible regularization to handle reward estimation uncertainty, thus setting our approach apart from the prior works.", "type": "Poster", @@ -30918,7 +30918,7 @@ "authors": [ "Gourav Datta", "Zeyu Liu", - "Peter Beerel" + "Peter Anthony Beerel" ], "abstract": "Binary Neural networks (BNN) have emerged as an attractive computing paradigm for a wide range of low-power vision tasks. However, state-of-the-art (SOTA) BNNs do not yield any sparsity, and induce a significant number of non-binary operations. On the other hand, activation sparsity can be provided by spiking neural networks (SNN), that too have gained significant traction in recent times. Thanks to this sparsity, SNNs when implemented on neuromorphic hardware, have the potential to be significantly more power-efficient compared to traditional artifical neural networks (ANN). However, SNNs incur multiple time steps to achieve close to SOTA accuracy. Ironically, this increases latency and energy---costs that SNNs were proposed to reduce---and presents itself as a major hurdle in realizing SNNs\u2019 theoretical gains in practice. This raises an intriguing question: *Can we obtain SNN-like sparsity and BNN-like accuracy and enjoy the energy-efficiency benefits of both?* To answer this question, in this paper, we present a training framework for sparse binary activation neural networks (BANN) using a novel variant of the Hoyer regularizer. We estimate the threshold of each BANN layer as the Hoyer extremum of a clipped version of its activation map, where the clipping value is trained using gradient descent with our Hoyer regularizer. This approach shifts the activation values away from the threshold, thereby mitigating the effect of noise that can otherwise degrade the BANN accuracy. Our approach outperforms existing BNNs, SNNs, and adder neural networks (that also avoid energy-expensive multiplication operations similar to BNNs and SNNs) in terms of the accuracy-FLOPs trade-off for complex image recognition tasks. Downstream experiments on object detection further demonstrate the efficacy of our approach. Lastly, we demonstrate the portability of our approach to SNNs with multiple time steps. 
Codes are publicly available [here](https://github.com/godatta/Ultra-Low-Latency-SNN).", "type": "Poster", @@ -30953,7 +30953,7 @@ "authors": [ "Christopher Fifty", "Dennis Duan", - "Ronald Junkins", + "Ronald Guenther Junkins", "Ehsan Amid", "Jure Leskovec", "Christopher Re", @@ -30990,7 +30990,7 @@ }, { "id": 17938, - "title": "Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting", + "title": "Pathformer: Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting", "authors": [ "Peng Chen", "Yingying ZHANG", @@ -31145,7 +31145,7 @@ "Yachao Zhang", "Yulun Zhang", "Chenyu You", - "zhenhua guo", + "Zhenhua Guo", "Xiu Li", "Martin Danelljan", "Fisher Yu" @@ -31161,7 +31161,7 @@ }, { "id": 17929, - "title": "Efficient Network Embedding in the Exponentially Large Quantum Hilbert Space: A High-Dimensional Perspective on Embedding", + "title": "Node2ket: Efficient High-Dimensional Network Embedding in Quantum Hilbert Space", "authors": [ "Hao Xiong", "Yehui Tang", @@ -31184,7 +31184,7 @@ "authors": [ "Aliz\u00e9e Pace", "Hugo Y\u00e8che", - "Bernhard Schoelkopf", + "Bernhard Sch\u00f6lkopf", "Gunnar Ratsch", "Guy Tennenholtz" ], @@ -31242,7 +31242,7 @@ "id": 17924, "title": "Sliced Denoising: A Physics-Informed Molecular Pre-Training Method", "authors": [ - "yuyan ni", + "Yuyan Ni", "Shikun Feng", "Wei-Ying Ma", "Zhi-Ming Ma", @@ -31261,7 +31261,7 @@ "id": 17922, "title": "ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models", "authors": [ - "\u0130lker Kesen", + "Ilker Kesen", "Andrea Pedrotti", "Mustafa Dogan", "Michele Cafagna", @@ -31312,7 +31312,7 @@ "Arda Sahiner", "Tolga Ergen", "Batu Ozturkler", - "John M Pauly", + "John M. Pauly", "Morteza Mardani", "Mert Pilanci" ], @@ -31330,7 +31330,7 @@ "title": "A Dynamical View of the Question of Why", "authors": [ "Mehdi Fatemi", - "Sindhu Chatralinganadoddi Mariyappa Gowda" + "Sindhu C. M. Gowda" ], "abstract": "In this paper, we address causal reasoning in multivariate time series data generated by stochastic processes. Traditional approaches are largely restricted to static settings, ignoring the continuity and emission of variations across time. In contrast, we propose a learning paradigm that directly establishes causation between \\emph{events} in the course of time. We present two key lemmas to compute causal contributions and frame them as reinforcement learning problems. Our approach offers formal and computational tools for uncovering and quantifying causal relationships in diffusion processes, subsuming various important settings such as discrete-time Markov decision processes. Finally, in fairly intricate experiments and through sheer learning, our framework reveals and quantifies causal links, which otherwise seem inexplicable.", "type": "Poster", @@ -31404,7 +31404,7 @@ "title": "The Effect of Intrinsic Dataset Properties on Generalization: Unraveling Learning Differences Between Natural and Medical Images", "authors": [ "Nicholas Konz", - "Maciej Mazurowski" + "Maciej A Mazurowski" ], "abstract": "This paper investigates discrepancies in how neural networks learn from different imaging domains, which are commonly overlooked when adopting computer vision techniques from the domain of natural images to other specialized domains such as medical images. Recent works have found that the generalization error of a trained network typically increases with the intrinsic dimension ($d_{data}$) of its training set. 
Yet, the steepness of this relationship varies significantly between medical (radiological) and natural imaging domains, with no existing theoretical explanation. We address this gap in knowledge by establishing and empirically validating a generalization scaling law with respect to $d_{data}$, and propose that the substantial scaling discrepancy between the two considered domains may be at least partially attributed to the higher intrinsic ``label sharpness'' ($K_\\mathcal{F}$) of medical imaging datasets, a metric which we propose. Next, we demonstrate an additional benefit of measuring the label sharpness of a training set: it is negatively correlated with the trained model's adversarial robustness, which notably leads to models for medical images having a substantially higher vulnerability to adversarial attack. Finally, we extend our $d_{data}$ formalism to the related metric of learned representation intrinsic dimension ($d_{repr}$), derive a generalization scaling law with respect to $d_{repr}$, and show that $d_{data}$ serves as an upper bound for $d_{repr}$. Our theoretical results are supported by thorough experiments with six models and eleven natural and medical imaging datasets over a range of training set sizes. Our findings offer insights into the influence of intrinsic dataset properties on generalization, representation learning, and robustness in deep neural networks. *Code link: https://github.com/mazurowski-lab/intrinsic-properties*", "type": "Poster", @@ -31419,7 +31419,7 @@ }, { "id": 18043, - "title": "Parameter-Efficient Multi-Task Model Fusion with Partial Linearizeation", + "title": "Parameter-Efficient Multi-Task Model Fusion with Partial Linearization", "authors": [ "Anke Tang", "Li Shen", @@ -31481,8 +31481,8 @@ "authors": [ "Rasool Fakoor", "Jonas Mueller", - "Zachary Lipton", - "Pratik A Chaudhari", + "Zachary Chase Lipton", + "Pratik Chaudhari", "Alex Smola" ], "abstract": "Real-world deployment of machine learning models is challenging because data evolves over time. While no model can work when data evolves in an arbitrary fashion, if there is some pattern to these changes, we might be able to design methods to address it. This paper addresses situations when data evolves gradually. We introduce a time-varying propensity score that can detect gradual shifts in the distribution of data which allows us to selectively sample past data to update the model---not just similar data from the past like that of a standard propensity score but also data that evolved in a similar fashion in the past. The time-varying propensity score is quite general: we demonstrate different ways of implementing it and evaluate it on a variety of problems ranging from supervised learning (e.g., image classification problems) where data undergoes a sequence of gradual shifts, to reinforcement learning tasks (e.g., robotic manipulation and continuous control) where data shifts as the policy or the task changes.", @@ -31556,13 +31556,13 @@ "id": 17909, "title": "Sentence-level Prompts Benefit Composed Image Retrieval", "authors": [ - "Yang Bai", + "Yang bai", "Xinxing Xu", "Yong Liu", "Salman Khan", "Fahad Khan", "Wangmeng Zuo", - "Rick Siow Mong Mong", + "Rick Siow Mong Goh", "Chun-Mei Feng" ], "abstract": "Composed image retrieval (CIR) is the task of retrieving specific images by using a query that involves both a reference image and a relative caption. Most existing CIR models adopt the late-fusion strategy to combine visual and language features. 
Besides, several approaches have also been suggested to generate a pseudo-word token from the reference image, which is further integrated into the relative caption for CIR. However, these pseudo-word-based prompting methods have limitations when target image encompasses complex changes on reference image, e.g., object removal and attribute modification. In this work, we demonstrate that learning an appropriate sentence-level prompt for the relative caption (SPRC) is sufficient for achieving effective composed image retrieval. Instead of relying on pseudo- word-based prompts, we propose to leverage pretrained V-L models, e.g., BLIP-2, to generate sentence-level prompts. By concatenating the learned sentence-level prompt with the relative caption, one can readily use existing text-based image retrieval models to enhance CIR performance. Furthermore, we introduce both image-text contrastive loss and text prompt alignment loss to enforce the learning of suitable sentence-level prompts. Experiments show that our proposed method performs favorably against the state-of-the-art CIR methods on the Fashion-IQ and CIRR datasets. The source code and pretrained model will be publicly available.", @@ -31581,15 +31581,15 @@ "title": "Policy Rehearsing: Training Generalizable Policies for Reinforcement Learning", "authors": [ "Chengxing Jia", - "Chen-Xiao Gao", + "Chenxiao Gao", "Hao Yin", "Fuxiang Zhang", - "XiongHui Chen", + "Xiong-Hui Chen", "Tian Xu", "Lei Yuan", "Zongzhang Zhang", - "Yang Yu", - "Zhi-Hua Zhou" + "Zhi-Hua Zhou", + "Yang Yu" ], "abstract": "Human beings can make adaptive decisions in a preparatory manner, i.e., by making preparations in advance, which offers significant advantages in scenarios where both online and offline experiences are expensive and limited. Meanwhile, current reinforcement learning methods commonly rely on numerous environment interactions but hardly obtain generalizable policies. In this paper, we introduce the idea of \\textit{rehearsal} into policy optimization, where the agent plans for all possible outcomes in mind and acts adaptively according to actual responses from the environment. To effectively rehearse, we propose ReDM, an algorithm that generates a diverse and eligible set of dynamics models and then rehearse the policy via adaptive training on the generated model set. Rehearsal enables the policy to make decision plans for various hypothetical dynamics and to naturally generalize to previously unseen environments. Our experimental results demonstrate that ReDM is capable of learning a valid policy solely through rehearsal, even with \\emph{zero} interaction data. We further extend ReDM to scenarios where limited or mismatched interaction data is available, and our experimental results reveal that ReDM produces high-performing policies compared to other offline RL baselines.", "type": "Poster", @@ -31682,8 +31682,8 @@ "authors": [ "Milan Papez", "Martin Rektoris", - "Tom\u00e1\u0161 Pevn\u00fd", - "Vaclav Smidl" + "Vaclav Smidl", + "Tom\u00e1\u0161 Pevn\u00fd" ], "abstract": "Daily internet communication relies heavily on tree-structured graphs, embodied by popular data formats such as XML and JSON. However, many recent generative (probabilistic) models utilize neural networks to learn a probability distribution over undirected cyclic graphs. This assumption of a generic graph structure brings various computational challenges, and, more importantly, the presence of non-linearities in neural networks does not permit tractable probabilistic inference. 
We address these problems by proposing sum-product-set networks, an extension of probabilistic circuits from unstructured tensor data to tree-structured graph data. To this end, we use random finite sets to reflect a variable number of nodes and edges in the graph and to allow for exact and efficient inference. We demonstrate that our tractable model performs comparably to various intractable models based on neural networks.", "type": "Poster", @@ -31763,13 +31763,13 @@ "authors": [ "Joey Bose", "Tara Akhound-Sadegh", - "Kilian FATRAS", "Guillaume Huguet", + "Kilian FATRAS", "Jarrid Rector-Brooks", - "Chenghao Liu", - "Andrei Nica", + "Cheng-Hao Liu", + "Andrei Cristian Nica", "Maksym Korablyov", - "Michael Bronstein", + "Michael M. Bronstein", "Alexander Tong" ], "abstract": "The computational design of novel protein structures has the potential to impact numerous scientific disciplines greatly. Toward this goal, we introduce \\foldflow, a series of novel generative models of increasing modeling power based on the flow-matching paradigm over $3\\mathrm{D}$ rigid motions---i.e. the group $\\mathrm{SE}(3)$---enabling accurate modeling of protein backbones. We first introduce FoldFlow-Base, a simulation-free approach to learning deterministic continuous-time dynamics and matching invariant target distributions on $\\mathrm{SE}(3)$. We next accelerate training by incorporating Riemannian optimal transport to create FoldFlow-OT, leading to the construction of both more simple and stable flows. Finally, we design FoldFlow-SFM, coupling both Riemannian OT and simulation-free training to learn stochastic continuous-time dynamics over $\\mathrm{SE}(3)$. Our family of FoldFlow, generative models offers several key advantages over previous approaches to the generative modeling of proteins: they are more stable and faster to train than diffusion-based approaches, and our models enjoy the ability to map any invariant source distribution to any invariant target distribution over $\\mathrm{SE}(3)$. Empirically, we validate our FoldFlow, models on protein backbone generation of up to $300$ amino acids leading to high-quality designable, diverse, and novel samples.", @@ -31788,7 +31788,7 @@ "Shih-Hsin Wang", "Yung-Chang Hsu", "Justin Baker", - "Andrea Bertozzi", + "Andrea L. Bertozzi", "Jack Xin", "Bao Wang" ], @@ -31953,7 +31953,7 @@ "Mustafa Shukor", "Alexandre Rame", "Corentin Dancette", - "MATTHIEU CORD" + "Matthieu Cord" ], "abstract": "Following the success of Large Language Models (LLMs), Large Multimodal Models (LMMs), such as the Flamingo model and its subsequent competitors, have started to emerge as natural step towards generalist agents. However, interacting with recent LMMs reveals major limitations that are hardly captured by the current evaluation benchmarks. Indeed, task performances (e.g., VQA accuracy) alone do not provide enough clues to understand their real capabilities, limitations, and to which extent such models are aligned to human expectations. To refine our understanding on those flaws, we deviate from the current evaluation paradigm, and (1) evaluate 8 recent open-source LMMs (based on the Flamingo architecture such as OpenFlamingo and IDEFICS) on 5 different axes; hallucinations, abstention, compositionality, explainability and instruction following. Our evaluation on these axes reveals major flaws in LMMs. 
To efficiently address these problems, and inspired by the success of In-Context Learning (ICL) in LLMs, (2) we explore ICL as a solution, and study how it affects these limitations. Based on our ICL study, (3) we push ICL further, and propose new multimodal ICL approaches such as; Multitask-ICL, Chain-of-Hindsight-ICL and Self-Correcting-ICL. Our findings are as follows. (1) Despite their success, LMMs have flaws that remain unsolved with scaling alone. (2) The effect of ICL on LMMs flaws is nuanced; despite its effectiveness for improved explainability, abstention and instruction following, ICL does not improve compositional abilities, and actually even amplifies hallucinations. (3) The proposed ICL variants are promising as post-hoc approaches to efficiently tackle some of those flaws. The code will be made public.", "type": "Poster", @@ -31969,7 +31969,7 @@ "title": "LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts", "authors": [ "Hanan Gani", - "Shariq Bhat", + "Shariq Farooq Bhat", "Muzammal Naseer", "Salman Khan", "Peter Wonka" @@ -31988,8 +31988,8 @@ "title": "BrainSCUBA: Fine-Grained Natural Language Captions of Visual Cortex Selectivity", "authors": [ "Andrew Luo", - "Maggie Henderson", - "Michael Tarr", + "Margaret Marie Henderson", + "Michael J. Tarr", "Leila Wehbe" ], "abstract": "Understanding the functional organization of higher visual cortex is a central focus in neuroscience. Past studies have primarily mapped the visual and semantic selectivity of neural populations using hand-selected stimuli, which may potentially bias results towards pre-existing hypotheses of visual cortex functionality. Moving beyond conventional approaches, we introduce a data-driven method that generates natural language descriptions for images predicted to maximally activate individual voxels of interest. Our method -- Semantic Captioning Using Brain Alignments (\"BrainSCUBA\") -- builds upon the rich embedding space learned by a contrastive vision-language model and utilizes a pre-trained large language model to generate interpretable captions. We validate our method through fine-grained voxel-level captioning across higher-order visual regions. We further perform text-conditioned image synthesis with the captions, and show that our images are semantically coherent and yield high predicted activations. Finally, to demonstrate how our method enables scientific discovery, we perform exploratory investigations on the distribution of \"person\" representations in the brain, and discover fine-grained semantic selectivity in body-selective areas. Unlike earlier studies that decode text, our method derives *voxel-wise captions of semantic selectivity*. Our results show that BrainSCUBA is a promising means for understanding functional preferences in the brain, and provides motivation for further hypothesis-driven investigation of visual cortex.", @@ -32025,10 +32025,10 @@ "id": 17948, "title": "Revisiting Deep Audio-Text Retrieval Through the Lens of Transportation", "authors": [ - "Tien Manh Luong", + "Manh Luong", "Khai Nguyen", "Nhat Ho", - "Reza Haffari", + "Reza Haf", "Dinh Phung", "Lizhen Qu" ], @@ -32104,7 +32104,7 @@ }, { "id": 17921, - "title": "The Optimal Constant Solution: Predictable Extrapolation in Deep Neural Networks", + "title": "Deep Neural Networks Tend To Extrapolate Predictably", "authors": [ "Katie Kang", "Amrith Setlur", @@ -32145,7 +32145,7 @@ "authors": [ "Shengjie Luo", "Tianlang Chen", - "Aditi Krishnapriyan" + "Aditi S. 
Krishnapriyan" ], "abstract": "Developing equivariant neural networks for the E(3) group plays a pivotal role in modeling 3D data across real-world applications. Enforcing this equivariance primarily involves the tensor products of irreducible representations (irreps). However, the computational complexity of such operations increases significantly as higher-order tensors are used. In this work, we propose a systematic approach to substantially accelerate the computation of the tensor products of irreps. We mathematically connect the commonly used Clebsch-Gordan coefficients to the Gaunt coefficients, which are integrals of products of three spherical harmonics. Through Gaunt coefficients, the tensor product of irreps becomes equivalent to the multiplication between spherical functions represented by spherical harmonics. This perspective further allows us to change the basis for the equivariant operations from spherical harmonics to a 2D Fourier basis. Consequently, the multiplication between spherical functions represented by a 2D Fourier basis can be efficiently computed via the convolution theorem and Fast Fourier Transforms. This transformation reduces the complexity of full tensor products of irreps from $\\mathcal{O}(L^6)$ to $\\mathcal{O}(L^3)$, where $L$ is the max degree of irreps. Leveraging this approach, we introduce the Gaunt Tensor Product, which serves as a new method to construct efficient equivariant operations across different model architectures. Our experiments demonstrate both the superior efficiency and improved performance of our approach on a range of tasks. The code and models will be made publicly available.", "type": "Spotlight Poster", @@ -32192,7 +32192,7 @@ }, { "id": 17881, - "title": "KW-Design: Pushing the Limit of Protein Deign via Knowledge Refinement", + "title": "KW-Design: Pushing the Limit of Protein Design via Knowledge Refinement", "authors": [ "Zhangyang Gao", "Cheng Tan", @@ -32200,7 +32200,7 @@ "Yijie Zhang", "Jun Xia", "Siyuan Li", - "Stan Z Li" + "Stan Z. Li" ], "abstract": "Recent studies have shown competitive performance in protein inverse folding, while most of them disregard the importance of predictive confidence, fail to cover the vast protein space, and do not incorporate common protein knowledge. Given the great success of pretrained models on diverse protein-related tasks and the fact that recovery is highly correlated with confidence, we wonder whether this knowledge can push the limits of protein design further. As a solution, we propose a knowledge-aware module that refines low-quality residues. We also introduce a memory-retrieval mechanism to save more than 50\\% of the training time. We extensively evaluate our proposed method on the CATH, TS50, TS500, and PDB datasets and our results show that our KW-Design method outperforms the previous PiFold method by approximately 9\\% on the CATH dataset. KW-Design is the first method that achieves 60+\\% recovery on all these benchmarks. We also provide additional analysis to demonstrate the effectiveness of our proposed method. 
The code will be publicly available upon acceptance.", "type": "Poster", @@ -32213,7 +32213,7 @@ }, { "id": 17880, - "title": "Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding", + "title": "Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation", "authors": [ "Xuefei Ning", "Zinan Lin", @@ -32237,7 +32237,7 @@ "authors": [ "Yuan Gao", "Rustem Islamov", - "Sebastian Stich" + "Sebastian U Stich" ], "abstract": "Modern distributed training relies heavily on communication compression to reduce the communication overhead. In this work, we study algorithms employing a popular class of contractive compressors in order to reduce communication overhead. However, the naive implementation often leads to unstable convergence or even exponential divergence due to the compression bias. Error Compensation (EC) is an extremely popular mechanism to mitigate the aforementioned issues during the training of models enhanced by contractive compression operators. Compared to the effectiveness of EC in the data homogeneous regime, the understanding of the practicality and theoretical foundations of EC in the data heterogeneous regime is limited. Existing convergence analyses typically rely on strong assumptions such as bounded gradients, bounded data heterogeneity, or large batch accesses, which are often infeasible in modern Machine Learning Applications. We resolve the majority of current issues by proposing EControl, a novel mechanism that can regulate error compensation by controlling the strength of the feedback signal. We prove fast convergence for EControl in standard strongly convex, general convex, and nonconvex settings without any additional assumptions on the problem or data heterogeneity. We conduct extensive numerical evaluations to illustrate the efficacy of our method and support our theoretical findings.", "type": "Poster", @@ -32256,7 +32256,7 @@ "Lukas Thede", "A. Sophia Koepke", "Oriol Vinyals", - "Olivier Henaff", + "Olivier J Henaff", "Zeynep Akata" ], "abstract": "Training deep networks requires various design decisions regarding for instance their architecture, data augmentation, or optimization. In this work, we find these training variations to result in networks learning unique feature sets from the data. Using public model libraries comprising thousands of models trained on canonical datasets like ImageNet, we observe that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other \u2013 independent of overall performance. Given any arbitrary pairing of pretrained models and no external rankings (such as separate test sets, e.g. due to data privacy), we investigate if it is possible to transfer such \"complementary\" knowledge from one model to another without performance degradation \u2013 a task made particularly difficult as additional knowledge can be contained in stronger, equiperformant or weaker models. Yet facilitating robust transfer in scenarios agnostic to pretrained model pairings would unlock auxiliary gains and knowledge fusion from any model repository without restrictions on model and problem specifics - including from weaker, lower-performance models. This work therefore provides an initial, in-depth exploration on the viability of such general-purpose knowledge transfer. 
Across large-scale experiments, we first reveal the shortcomings of standard knowledge distillation techniques, and then propose a much more general extension through data partitioning for successful transfer between nearly all pretrained models, which we show can also be done unsupervised. Finally, we assess both the scalability and impact of fundamental model properties on successful model-agnostic knowledge transfer.", @@ -32297,7 +32297,7 @@ "title": "EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations", "authors": [ "Yi-Lun Liao", - "Brandon Wood", + "Brandon M Wood", "Abhishek Das", "Tess Smidt" ], @@ -32316,7 +32316,7 @@ "authors": [ "Tianwei Ni", "Benjamin Eysenbach", - "Erfan Seyedsalehi", + "Erfan SeyedSalehi", "Michel Ma", "Clement Gehring", "Aditya Mahajan", @@ -32333,12 +32333,12 @@ }, { "id": 17888, - "title": "Improving Code Style for Accurate Code Generation", + "title": "LLM-Assisted Code Cleaning For Training Accurate Code Generators", "authors": [ "Naman Jain", "Tianjun Zhang", "Wei-Lin Chiang", - "Joseph E Gonzalez", + "Joseph E. Gonzalez", "Koushik Sen", "Ion Stoica" ], @@ -32399,7 +32399,7 @@ "Han Chen", "Gagandeep Singh", "Volodymyr Kindratenko", - "Eliu Huerta", + "Eliu A Huerta", "Kibaek Kim", "Ravi Madduri" ], @@ -32414,7 +32414,7 @@ }, { "id": 17876, - "title": "Boosting Prompting Mechanisms for Zero-Shot Speech Synthesis", + "title": "Mega-TTS 2: Boosting Prompting Mechanisms for Zero-Shot Speech Synthesis", "authors": [ "Ziyue Jiang", "Jinglin Liu", @@ -32463,12 +32463,12 @@ "authors": [ "Niklas Muennighoff", "Qian Liu", - "Armel Zebaze", + "Armel Randy Zebaze", "Qinkai Zheng", "Binyuan Hui", "Terry Yue Zhuo", "Swayam Singh", - "xiangru tan\u2006h", + "Xiangru Tang", "Leandro Von Werra", "Shayne Longpre" ], @@ -32508,7 +32508,7 @@ "authors": [ "Alessandro De Palma", "Rudy R Bunel", - "Krishnamurthy Dvijotham", + "Krishnamurthy Dj Dvijotham", "M. Pawan Kumar", "Robert Stanforth", "Alessio Lomuscio" @@ -32533,7 +32533,7 @@ "Weihua Du", "Hongxin Zhang", "Yilun Du", - "Joshua B Tenenbaum", + "Joshua B. Tenenbaum", "Chuang Gan" ], "abstract": "Recent advances in high-fidelity virtual environments serve as one of the major driving forces for building intelligent embodied agents to perceive, reason and interact with the physical world. Typically, these environments remain unchanged unless agents interact with them. However, in real-world scenarios, agents might also face dynamically changing environments characterized by unexpected events and need to rapidly take action accordingly. To remedy this gap, we propose a new simulated embodied benchmark, called HAZARD, specifically designed to assess the decision-making abilities of embodied agents in dynamic situations. HAZARD consists of three unexpected disaster scenarios, including fire, flood, and wind, and specifically supports the utilization of large language models (LLMs) to assist common sense reasoning and decision-making. This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines, including reinforcement learning (RL), rule-based, and search-based methods in dynamically changing environments. 
As a first step toward addressing this challenge using large language models, we further develop an LLM-based agent and perform an in-depth analysis of its promise and challenge of solving these challenging tasks.", @@ -32549,9 +32549,9 @@ "id": 17871, "title": "Mayfly: a Neural Data Structure for Graph Stream Summarization", "authors": [ - "yuan feng", + "Yuan Feng", "Yukun Cao", - "Hairu Wang", + "Wang Hairu", "Xike Xie", "S Kevin Zhou" ], @@ -32568,7 +32568,7 @@ "id": 17870, "title": "The Consensus Game: Language Model Generation via Equilibrium Search", "authors": [ - "Athul Jacob", + "Athul Paul Jacob", "Yikang Shen", "Gabriele Farina", "Jacob Andreas" @@ -32588,7 +32588,7 @@ "authors": [ "Yavuz Faruk Bakman", "Duygu Nur Yaldiz", - "Yahya Ezzeldin", + "Yahya H. Ezzeldin", "Salman Avestimehr" ], "abstract": "Federated Learning (FL) has gained significant attraction due to its ability to enable privacy-preserving training over decentralized data. Current literature in FL mostly focuses on single-task learning. However, over time, new tasks may appear in the clients and the global model should learn these tasks without forgetting previous tasks. This real-world scenario is known as Continual Federated Learning (CFL). The main challenge of CFL is \\textit{Global Catastrophic Forgetting}, which corresponds to the fact that when the global model is trained on new tasks, its performance on old tasks decreases. There have been a few recent works on CFL to propose methods that aim to address the global catastrophic forgetting problem. However, these works either have unrealistic assumptions on the availability of past data samples or violate the privacy principles of FL. We propose a novel method, Federated Orthogonal Training (FOT), to overcome these drawbacks and address the global catastrophic forgetting in CFL. Our algorithm extracts the global input subspace of each layer for old tasks and modifies the aggregated updates of new tasks such that they are orthogonal to the global principal subspace of old tasks for each layer. This decreases the interference between tasks, which is the main cause for forgetting. % Our method is almost computation-free on the client side and has negligible communication cost. 
We empirically show that FOT outperforms state-of-the-art continual learning methods in the CFL setting, achieving an average accuracy gain of up to 15% with 27% lower forgetting while only incurring a minimal computation and communication cost.", @@ -32602,9 +32602,9 @@ }, { "id": 17868, - "title": "Self-Supervised Contrastive Forecasting", + "title": "Self-Supervised Contrastive Learning for Long-term Forecasting", "authors": [ - "JUNWOO PARK", + "Junwoo Park", "Daehoon Gwak", "Jaegul Choo", "Edward Choi" @@ -32639,7 +32639,7 @@ "id": 17863, "title": "Structural Estimation of Partially Observed Linear Non-Gaussian Acyclic Model: A Practical Approach with Identifiability", "authors": [ - "songyao jin", + "Songyao Jin", "Feng Xie", "Guangyi Chen", "Biwei Huang", @@ -32714,7 +32714,7 @@ "id": 17858, "title": "FedInverse: Evaluating Privacy Leakage in Federated Learning", "authors": [ - "DI WU", + "Di Wu", "Jun Bai", "Yiliao Song", "Junjun Chen", @@ -32738,7 +32738,7 @@ "Zehao Dong", "Muhan Zhang", "Philip Payne", - "Michael Province", + "Michael A Province", "Carlos Cruchaga", "Tianyu Zhao", "Fuhai Li", @@ -32777,11 +32777,11 @@ "Xin Wang", "Hua Xu", "Qianrui Zhou", + "Kai Gao", "Jianhua Su", "jinyue Zhao", "Wenrui Li", - "Yanting Chen", - "Kai Gao" + "Yanting Chen" ], "abstract": "Multimodal intent recognition poses significant challenges, requiring the incorporation of non-verbal modalities from real-world contexts to enhance the comprehension of human intentions. However, most existing multimodal intent benchmark datasets are limited in scale and suffer from difficulties in handling out-of-scope samples that arise in multi-turn conversational interactions. In this paper, we introduce MIntRec2.0, a large-scale benchmark dataset for multimodal intent recognition in multi-party conversations. It contains 1,245 high-quality dialogues with 15,040 samples, each annotated within a new intent taxonomy of 30 fine-grained classes, across text, video, and audio modalities. In addition to more than 9,300 in-scope samples, it also includes over 5,700 out-of-scope samples appearing in multi-turn contexts, which naturally occur in real-world open scenarios, enhancing its practical applicability. Furthermore, we provide comprehensive information on the speakers in each utterance, enriching its utility for multi-party conversational research. We establish a general framework supporting the organization of single-turn and multi-turn dialogue data, modality feature extraction, multimodal fusion, as well as in-scope classification and out-of-scope detection. Evaluation benchmarks are built using classic multimodal fusion methods, ChatGPT, and human evaluators. While existing methods incorporating nonverbal information yield improvements, effectively leveraging context information and detecting out-of-scope samples remains a substantial challenge. Notably, powerful large language models exhibit a significant performance gap compared to humans, highlighting the limitations of machine learning methods in the advanced cognitive intent understanding task. 
We believe that MIntRec2.0 will serve as a valuable resource, providing a pioneering foundation for research in human-machine conversational interactions, and significantly facilitating related applications.", "type": "Poster", @@ -32896,8 +32896,8 @@ "title": "Analysis of Learning a Flow-based Generative Model from Limited Sample Complexity", "authors": [ "Hugo Cui", - "Eric Vanden-Eijnden", "Florent Krzakala", + "Eric Vanden-Eijnden", "Lenka Zdeborova" ], "abstract": "We study the problem of training a flow-based generative model, parametrized by a two-layer autoencoder, to sample from a high-dimensional Gaussian mixture. We provide a sharp end-to-end analysis of the problem. First, we provide a tight closed-form characterization of the learnt velocity field, when parametrized by a shallow denoising auto-encoder trained on a finite number $n$ of samples from the target distribution. Building on this analysis, we provide a sharp description of the corresponding generative flow, which pushes the base Gaussian density forward to an approximation of the target density. In particular, we provide closed-form formulae for the distance between the means of the generated mixture and the mean of the target mixture, which we show decays as $\\Theta_n(\\frac{1}{n})$. Finally, this rate is shown to be in fact Bayes-optimal.", @@ -32983,7 +32983,7 @@ "title": "Interpretable Meta-Learning of Physical Systems", "authors": [ "Matthieu Blanke", - "marc lelarge" + "Marc Lelarge" ], "abstract": "Machine learning methods can be a valuable aid in the scientific process, but they need to face challenging settings where data come from inhomogeneous experimental conditions. Recent meta-learning methods have made significant progress in multi-task learning, but they rely on black-box neural networks, resulting in high computational costs and limited interpretability. We introduce CAMEL, a new meta-learning architecture capable of learning efficiently from multiple environments, with an affine structure with respect to the learning task. We prove that CAMEL can identify the physical parameters of the system, enabling interpreable learning. We demonstrate the competitive generalization performance and the low computational cost of our method by comparing it to state-of-the-art algorithms on physical systems, ranging from toy models to complex, non-analytical systems. The interpretability of our method is illustrated with original applications to physical-parameter-induced adaptation and to adaptive control and system identification.", "type": "Poster", @@ -33092,8 +33092,8 @@ "id": 17833, "title": "MaGIC: Multi-modality Guided Image Completion", "authors": [ - "Yongsheng Yu", "Hao Wang", + "Yongsheng Yu", "Tiejian Luo", "Heng Fan", "Libo Zhang" @@ -33164,7 +33164,7 @@ "id": 17829, "title": "Transformer-VQ: Linear-Time Transformers via Vector Quantization", "authors": [ - "Lucas D. Lingle" + "Lucas Dax Lingle" ], "abstract": "We introduce Transformer-VQ, a decoder-only transformer computing softmax-based dense self-attention in linear time. Transformer-VQ's efficient attention is enabled by vector-quantized keys and a novel caching mechanism. In large-scale experiments, Transformer-VQ is shown highly competitive in quality, with strong results on Enwik8 (0.99 bpb), PG-19 (26.6 ppl), and ImageNet64 (3.16 bpb). 
Code: https://github.com/transformer-vq/transformer_vq", "type": "Poster", @@ -33182,7 +33182,7 @@ "Zeyu Liu", "Gourav Datta", "Anni Li", - "Peter Beerel" + "Peter Anthony Beerel" ], "abstract": "Transformer models have demonstrated high accuracy in numerous applications but have high complexity and lack sequential processing capability making them ill-suited for many streaming applications at the edge where devices are heavily resource-constrained. Thus motivated, many researchers have proposed reformulating the transformer models as RNN modules which modify the self-attention computation with explicit states. However, these approaches often incur significant performance degradation.The ultimate goal is to develop a model that has the following properties: parallel training, streaming and low-cost inference, and state-of-the-art (SOTA) performance. In this paper, we propose a new direction to achieve this goal. We show how architectural modifications to a fully-sequential recurrent model can help push its performance toward Transformer models while retaining its sequential processing capability. Specifically, inspired by the recent success of Legendre Memory Units (LMU) in sequence learning tasks, we propose LMUFormer, which augments the LMU with convolutional patch embedding and convolutional channel mixer. Moreover, we present a spiking version of this architecture, which introduces the benefit of states within the patch embedding and channel mixer modules while simultaneously reducing the computing complexity. We evaluated our architectures on multiple sequence datasets. Of particular note is our performance on the Speech Commands V2 dataset (35 classes). In comparison to SOTA transformer-based models within the ANN domain, our LMUFormer demonstrates comparable performance while necessitating a remarkable $70\\times$ reduction in parameters and a substantial $140\\times$ decrement in FLOPs. Furthermore, when benchmarked against extant low-complexity SNN variants, our model establishes a new SOTA with an accuracy of 96.12\\%. Additionally, owing to our model's proficiency in real-time data processing, we are able to achieve a 32.03\\% reduction in sequence length, all while incurring an inconsequential decline in performance.", "type": "Poster", @@ -33200,7 +33200,7 @@ "title": "WebArena: A Realistic Web Environment for Building Autonomous Agents", "authors": [ "Shuyan Zhou", - "Frank F Xu", + "Frank F. Xu", "Hao Zhu", "Xuhui Zhou", "Robert Lo", @@ -33246,7 +33246,7 @@ "authors": [ "Qiying Yu", "Yudi Zhang", - "yuyan ni", + "Yuyan Ni", "Shikun Feng", "Yanyan Lan", "Hao Zhou", @@ -33269,7 +33269,7 @@ "Yanrong Ji", "Weijian Li", "Pratik Dutta", - "Ramana Davuluri", + "Ramana V Davuluri", "Han Liu" ], "abstract": "Decoding the linguistic intricacies of the genome is a crucial problem in biology, and pre-trained foundational models such as DNABERT and Nucleotide Transformer have made significant strides in this area. Existing works have largely hinged on k-mer, fixed-length permutations of A, T, C, and G, as the token of the genome language due to its simplicity. However, we argue that the computation and sample inefficiencies introduced by k-mer tokenization are primary obstacles in developing large genome foundational models. 
We provide conceptual and empirical insights into genome tokenization, building on which we propose to replace k-mer tokenization with Byte Pair Encoding (BPE), a statistics-based data compression algorithm that constructs tokens by iteratively merging the most frequent co-occurring genome segment in the corpus. We demonstrate that BPE not only overcomes the limitations of k-mer tokenization but also benefits from the computational efficiency of non-overlapping tokenization.Based on these insights, we introduce DNABERT-2, a refined genome foundation model that adapts an efficient tokenizer and employs multiple strategies to overcome input length constraints, reduce time and memory expenditure, and enhance model capability. Furthermore, we identify the absence of a comprehensive and standardized benchmark for genome understanding as another significant impediment to fair comparative analysis. In response, we propose the Genome Understanding Evaluation (GUE), a comprehensive multi-species genome classification dataset that amalgamates $28$ distinct datasets across $7$ tasks, with input lengths ranging from $70$ to $1000$. Through comprehensive experiments on the GUE benchmark, we demonstrate that DNABERT-2 achieves comparable performance to the state-of-the-art model with $21 \\times$ fewer parameters and approximately $92 \\times$ less GPU time in pre-training. Compared to DNABERT, while being $3 \\times$ more efficient, DNABERT-2 outperforms it on $23$ out of $28$ datasets, with an average improvement of $6$ absolute scores on GUE.The code, data, and pre-trained model will be publicly available.", @@ -33358,7 +33358,7 @@ "Deng Cai", "Leyang Cui", "Xuxin Cheng", - "Wei BI", + "Wei Bi", "Yuexian Zou", "Shuming Shi" ], @@ -33399,8 +33399,8 @@ "Szymon Tworkowski", "Szymon Antoniak", "Bartosz Piotrowski", - "Qiaochu Jiang", - "Jin Zhou", + "Albert Q. Jiang", + "Jin Peng Zhou", "Christian Szegedy", "\u0141ukasz Kuci\u0144ski", "Piotr Mi\u0142o\u015b", @@ -33445,9 +33445,9 @@ "authors": [ "Xinyu Tang", "Richard Shin", - "Huseyin Inan", + "Huseyin A Inan", "Andre Manoel", - "Niloofar Mireshghallah", + "Fatemehsadat Mireshghallah", "Zinan Lin", "Sivakanth Gopi", "Janardhan Kulkarni", @@ -33467,8 +33467,8 @@ "title": "LDReg: Local Dimensionality Regularized Self-Supervised Learning", "authors": [ "Hanxun Huang", - "Ricardo Campello", - "Sarah Erfani", + "Ricardo J. G. B. Campello", + "Sarah Monazam Erfani", "Xingjun Ma", "Michael E. Houle", "James Bailey" @@ -33544,7 +33544,7 @@ "id": 17807, "title": "Copula Conformal prediction for multi-step time series prediction", "authors": [ - "Sophia Sun", + "Sophia Huiwen Sun", "Rose Yu" ], "abstract": "Accurate uncertainty measurement is a key step to building robust and reliable machine learning systems. Conformal prediction is a distribution-free uncertainty quantification algorithm popular for its ease of implementation, statistical coverage guarantees, and versatility for underlying forecasters. However, existing conformal prediction algorithms for time series are limited to single-step prediction without considering the temporal dependency. In this paper we propose the Copula Conformal Prediction algorithm for multivariate, multi-step Time Series forecasting, CopulaCPTS. We prove that CopulaCPTS has finite sample validity guarantee. 
On four synthetic and real-world multivariate time series datasets, we show that CopulaCPTS produces more calibrated and efficient confidence intervals for multi-step prediction tasks than existing techniques.", @@ -33580,7 +33580,7 @@ "id": 19725, "title": "ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models", "authors": [ - "Iman Mirzadeh", + "Seyed Iman Mirzadeh", "Keivan Alizadeh-Vahid", "Sachin Mehta", "Carlo C del Mundo", @@ -33603,10 +33603,10 @@ "title": "Prototypical Information Bottlenecking and Disentangling for Multimodal Cancer Survival Prediction", "authors": [ "Yilan Zhang", - "Yingxue XU", + "Yingxue Xu", "Jianqi Chen", "Fengying Xie", - "Hao CHEN" + "Hao Chen" ], "abstract": "Multimodal learning significantly benefits cancer survival prediction, especially the integration of pathological images and genomic data. Despite advantages of multimodal learning for cancer survival prediction, massive redundancy in multimodal data prevents it from extracting discriminative and compact information: (1) An extensive amount of intra-modal task-unrelated information blurs discriminability, especially for gigapixel whole slide images (WSIs) with many patches in pathology and thousands of pathways in genomic data, leading to an \"intra-modal redundancy\" issue. (2) Duplicated information among modalities dominates the representation of multimodal data, which makes modality-specific information prone to being ignored, resulting in an \"inter-modal redundancy\" issue. To address these, we propose a new framework, Prototypical Information Bottlenecking and Disentangling (PIBD), consisting of Prototypical Information Bottleneck (PIB) module for intra-modal redundancy and Prototypical Information Disentanglement (PID) module for inter-modal redundancy. Specifically, a variant of information bottleneck, PIB, is proposed to model prototypes approximating a bunch of instances for different risk levels, which can be used for selection of discriminative instances within modality. PID module decouples entangled multimodal data into compact distinct components: modality-common and modality-specific knowledge, under the guidance of the joint prototypical distribution. Extensive experiments on five cancer benchmark datasets demonstrated our superiority over other methods. The code is released.", "type": "Spotlight Poster", @@ -33664,7 +33664,7 @@ "title": "Yet Another ICU Benchmark: A Flexible Multi-Center Framework for Clinical ML", "authors": [ "Robin van de Water", - "Hendrik Schmidt", + "Hendrik Nils Aurel Schmidt", "Paul Elbers", "Patrick Thoral", "Bert Arnrich", @@ -33688,7 +33688,7 @@ "Weiyu Sun", "Xinyu Zhang", "Hao LU", - "YINGCONG CHEN", + "Ying-Cong Chen", "Ting Wang", "Jinghui Chen", "Lu Lin" @@ -33704,11 +33704,11 @@ }, { "id": 17798, - "title": "The Truth Is In There: Improving Reasoning with Layer-Selective Rank Reduction", + "title": "The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction", "authors": [ "Pratyusha Sharma", - "Jordan Ash", - "Dipendra Kumar Misra" + "Jordan T. Ash", + "Dipendra Misra" ], "abstract": "Transformer-based Large Language Models (LLMs) have become a fixture in modern machine learning. Correspondingly, significant resources are allocated towards research that aims to further advance this technology, typically resulting in models of increasing size that are trained on increasing amounts of data. 
This work, however, demonstrates the surprising result that it is often possible to improve the performance of LLMs by simply removing higher-order components of their constituent weight matrices in the multi-layer perceptron (MLP) layers. This simple intervention, which we call LAyer-SElective Rank reduction (LASER), can be done on a model after training has completed, and requires no additional parameters or data. LASER can dramatically boost predictive performance\u2014at times by 80% over the model\u2019s original performance\u2014on question-answering tasks and across various modalities for which Transformers are used.", "type": "Poster", @@ -33761,13 +33761,13 @@ "title": "Causal Modelling Agents: Causal Graph Discovery through Synergising Metadata- and Data-driven Reasoning", "authors": [ "Ahmed Abdulaal", - "Adamos Hadjivasiliou", - "Nina Monta\u00f1a-Brown", + "adamos hadjivasiliou", + "Nina Montana-Brown", "Tiantian He", "Ayodeji Ijishakin", "Ivana Drobnjak", - "Daniel Castro", - "Daniel Alexander" + "Daniel C. Castro", + "Daniel C. Alexander" ], "abstract": "Scientific discovery hinges on the effective integration of metadata, which refers to a set of 'cognitive' operations such as determining what information is relevant for inquiry, and data, which encompasses physical operations such as observation and experimentation. This paper introduces the Causal Modelling Agent (CMA), a novel framework that synergizes the metadata-based reasoning capabilities of Large Language Models (LLMs) with the data-driven modelling of Deep Structural Causal Models (DSCMs) for the task of causal discovery. We evaluate the CMA's performance on a number of benchmarks, as well as on the real-world task of modelling the clinical and radiological phenotype of Alzheimer's Disease (AD). Our experimental results indicate that the CMA can outperform previous data-driven or metadata-driven approaches to causal discovery. In our real-world application, we use the CMA to derive new insights into the causal relationships among biomarkers of AD.", "type": "Poster", @@ -33816,7 +33816,7 @@ }, { "id": 17787, - "title": "Large-scale training of foundation models for wearable biosignals", + "title": "Large-scale Training of Foundation Models for Wearable Biosignals", "authors": [ "Salar Abbaspourazad", "Oussama Elachqar", @@ -33844,7 +33844,7 @@ "Michal Lukasik", "Felix Yu", "Cho-Jui Hsieh", - "Inderjit Dhillon", + "Inderjit S Dhillon", "Sanjiv Kumar" ], "abstract": "Pretrained large language models (LLMs) are general purpose problem solvers applicable to a diverse set of tasks with prompts. They can be further improved towards a specific task by fine-tuning on a specialized dataset. However, fine-tuning usually makes the model narrowly specialized on this dataset with reduced general in-context learning performances, which is undesirable whenever the fine-tuned model needs to handle additional tasks where no fine-tuning data is available. In this work, we first demonstrate that fine-tuning on a single task indeed decreases LLMs' general in-context learning performance. We discover one important cause of such forgetting, format specialization, where the model overfits to the format of the fine-tuned task. We further show that format specialization happens at the very beginning of fine-tuning. To solve this problem, we propose Prompt Tuning with MOdel Tuning (ProMoT), a simple yet effective two-stage fine-tuning framework that reduces format specialization and improves generalization.
ProMoT offloads task-specific format learning into additional and removable parameters by first doing prompt tuning and then fine-tuning the model itself with this soft prompt attached. With experiments on several fine-tuning tasks and 8 in-context evaluation tasks, we show that ProMoT achieves comparable performance on fine-tuned tasks to standard fine-tuning, but with much less loss of in-context learning performances across a broad range of out-of-domain evaluation tasks. More importantly, ProMoT can even enhance generalization on in-context learning tasks that are semantically related to the fine-tuned task, e.g. ProMoT on En-Fr translation significantly improves performance on other language pairs, and ProMoT on NLI improves performance on summarization. Experiments also show that ProMoT can improve the generalization performance of multi-task training.", @@ -33895,7 +33895,7 @@ }, { "id": 17783, - "title": "Calibrated Chaos: Variance Between Runs of Neural Network Training is Harmless and Inevitable", + "title": "On the Variance of Neural Network Training with respect to Test Sets and Distributions", "authors": [ "Keller Jordan" ], @@ -34050,7 +34050,7 @@ "LI Yang", "RUIZHENG WU", "Jiyong Li", - "YINGCONG CHEN" + "Ying-Cong Chen" ], "abstract": "Learning surfaces from neural radiance field (NeRF) became a rising topic in Multi-View Stereo (MVS). Recent Signed Distance Function (SDF)-based methods demonstrated their ability to reconstruct exact 3D shapes of Lambertian scenes. However, their results on reflective scenes are unsatisfactory due to the entanglement of specular radiance and complicated geometry. To address the challenges, we propose a Gaussian-based representation of normals in SDF fields. Supervised by polarization priors, this representation guides the learning of geometry behind the specular reflection and capture more details than existing methods. Moreover, we propose a reweighting strategy in optimization process to alleviate the noise issue of polarization priors. To validate the effectiveness of our design, we capture polarimetric information and ground truth meshes in additional reflective scenes with various geometry. We also evaluated our framework on PANDORA dataset. Both qualitative and quantitative comparisons prove our method outperforms existing neural 3D reconstruction methods in reflective scenes by a large margin.", "type": "Poster", @@ -34063,7 +34063,7 @@ }, { "id": 17773, - "title": "EXPLORING RAIN-/DETAIL-AWARE REPRESENTATION FOR INSTANCE-SPECIFIC IMAGE DE-RAINING", + "title": "Harnessing Joint Rain-/Detail-aware Representations to Eliminate Intricate Rains", "authors": [ "Wu Ran", "Peirong Ma", @@ -34100,7 +34100,7 @@ "id": 17770, "title": "Benchmarking and Improving Generator-Validator Consistency of Language Models", "authors": [ - "Xiang Li", + "Xiang Lisa Li", "Vaishnavi Shrivastava", "Siyan Li", "Tatsunori Hashimoto", @@ -34123,7 +34123,7 @@ "Yilun Du", "Bo Dai", "Dale Schuurmans", - "Joshua B Tenenbaum", + "Joshua B. Tenenbaum", "Pieter Abbeel" ], "abstract": "Large text-to-video models trained on internet-scale data have demonstrated exceptional capabilities in generating high-fidelity videos from arbitrary textual descriptions. However, similar to proprietary language models, large text-to-video models are often black boxes whose weight parameters are not publicly available, posing a significant challenge to adapting these models to specific domains such as robotics, animation, and personalized stylization.
Inspired by how a large language model can be prompted to perform new tasks without access to the model weights, we investigate how to adapt a black-box pretrained text-to-video model to a variety of downstream domains without weight access to the pretrained model. In answering this question, we propose \\emph{\\methodname}, which leverages the score function of a large pretrained video diffusion model as a probabilistic prior to guide the generation of a task-specific small video model. Our experiments show that, by incorporating broad knowledge and fidelity of the pretrained model probabilistically, a small model with as few as 1.25% parameters of the pretrained model can generate high-quality yet domain-specific videos for a variety of downstream domains such as animation, egocentric modeling, and modeling of simulated and real-world robotics data. As large text-to-video models starting to become available as a service similar to large language models, we advocate for private institutions to expose scores of video diffusion models as outputs in addition to generated videos to allow flexible adaptation of large pretrained text-to-video models by the general public.", @@ -34197,7 +34197,7 @@ "Yingyu Lin", "Yian Ma", "Yu-Xiang Wang", - "Rachel Redberg", + "Rachel Emily Redberg", "Zhiqi Bu" ], "abstract": "Posterior sampling, i.e., exponential mechanism to sample from the posterior distribution, provides $\\varepsilon$-pure differential privacy (DP) guarantees and does not suffer from potentially unbounded privacy breach introduced by $(\\varepsilon,\\delta)$-approximate DP. In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo (MCMC), thus re-introducing the unappealing $\\delta$-approximation error into the privacy guarantees. To bridge this gap, we propose the Approximate SAample Perturbation (abbr. ASAP) algorithm which perturbs an MCMC sample with noise proportional to its Wasserstein-infinity ($W_\\infty$) distance from a reference distribution that satisfies pure DP or pure Gaussian DP (i.e., $\\delta=0$). We then leverage a Metropolis-Hastings algorithm to generate the sample and prove that the algorithm converges in W$_\\infty$ distance. We show that by combining our new techniques with a careful localization step, we obtain the first nearly linear-time algorithm that achieves the optimal rates in the DP-ERM problem with strongly convex and smooth losses.", @@ -34233,7 +34233,7 @@ "id": 17793, "title": "Improved Regret Bounds for Non-Convex Online-Within-Online Meta Learning", "authors": [ - "Jiechao GUAN", + "Jiechao Guan", "Hui Xiong" ], "abstract": "Online-Within-Online (OWO) meta learning stands for the online multi-task learning paradigm in which both tasks and data within each task become available in a sequential order. In this work, we study the OWO meta learning of the initialization and step size of within-task online algorithms in the non-convex setting, and provide improved regret bounds under mild assumptions of loss functions. Previous work analyzing this scenario has obtained for bounded and piecewise Lipschitz functions an averaged regret bound $O((\\frac{\\sqrt{m}}{T^{1/4}}+\\frac{(\\log{m})\\log{T}}{\\sqrt{T}}+V)\\sqrt{m})$ across $T$ tasks, with $m$ iterations per task and $V$ the task similarity. 
Our first contribution is to modify the existing non-convex OWO meta learning algorithm and improve the regret bound to $O((\\frac{1}{T^{1/2-\\alpha}}+\\frac{(\\log{T})^{9/2}}{T}+V)\\sqrt{m})$, for any $\\alpha \\in (0,1/2)$. The derived bound has a faster convergence rate with respect to $T$, and guarantees a vanishing task-averaged regret with respect to $m$ (for any fixed $T$). Then, we propose a new algorithm of regret $O((\\frac{\\log{T}}{T}+V)\\sqrt{m})$ for non-convex OWO meta learning. This regret bound exhibits a better asymptotic performance than previous ones, and holds for any bounded (not necessarily Lipschitz) loss functions. Besides the improved regret bounds, our contributions include investigating how to attain generalization bounds for statistical meta learning via regret analysis. Specifically, by online-to-batch arguments, we achieve a transfer risk bound for batch meta learning that assumes all tasks are drawn from a distribution. Moreover, by connecting multi-task generalization error with task-averaged regret, we develop for statistical multi-task learning a novel PAC-Bayes generalization error bound that involves our regret bound for OWO meta learning.", @@ -34327,7 +34327,7 @@ "id": 17796, "title": "Traveling Waves Encode The Recent Past and Enhance Sequence Learning", "authors": [ - "Andy Keller", + "T. Anderson Keller", "Lyle Muller", "Terrence Sejnowski", "Max Welling" @@ -34382,9 +34382,9 @@ "id": 17759, "title": "ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference", "authors": [ + "Krzysztof Kacprzyk", "Samuel Holt", "Jeroen Berrevoets", - "Krzysztof Kacprzyk", "Zhaozhi Qian", "Mihaela van der Schaar" ], @@ -34423,7 +34423,7 @@ "id": 17757, "title": "Quantifying the Sensitivity of Inverse Reinforcement Learning to Misspecification", "authors": [ - "Joar Skalse", + "Joar Max Viktor Skalse", "Alessandro Abate" ], "abstract": "Inverse reinforcement learning (IRL) aims to infer an agent's *preferences* (represented as a reward function $R$) from their *behaviour* (represented as a policy $\\pi$). To do this, we need a *behavioural model* of how $\\pi$ relates to $R$. In the current literature, the most common behavioural models are *optimality*, *Boltzmann-rationality*, and *causal entropy maximisation*. However, the true relationship between a human's preferences and their behaviour is much more complex than any of these behavioural models. This means that the behavioural models are *misspecified*, which raises the concern that they may lead to systematic errors if applied to real data. In this paper, we analyse how sensitive the IRL problem is to misspecification of the behavioural model. Specifically, we provide necessary and sufficient conditions that completely characterise how the observed data may differ from the assumed behavioural model without incurring an error above a given threshold. In addition to this, we also characterise the conditions under which a behavioural model is robust to small perturbations of the observed policy, and we analyse how robust many behavioural models are to misspecification of their parameter values (such as e.g. the discount rate). Our analysis suggests that the IRL problem is highly sensitive to misspecification, in the sense that very mild misspecification can lead to very large errors in the inferred reward function.", @@ -34444,7 +34444,7 @@ "Tal Schuster", "Adam Yala", "Jae Ho Sohn", - "Tommi Jaakkola", + "Tommi S. 
Jaakkola", "Regina Barzilay" ], "abstract": "In this paper, we propose a novel approach to conformal prediction for language models (LMs) in which we produce prediction sets with performance guarantees. LM responses are typically sampled from a predicted distribution over the large, combinatorial output space of language. Translating this to conformal prediction, we calibrate a stopping rule for sampling LM outputs that get added to a growing set of candidates until we are confident that the set covers at least one acceptable response. Since some samples may be low-quality, we also simultaneously calibrate a rejection rule for removing candidates from the output set to reduce noise. Similar to conformal prediction, we can prove that the final output set obeys certain desirable distribution-free guarantees. Within these sets of candidate responses, we also show that we can also identify subsets of individual components---such as phrases or sentences---that are each independently correct (e.g., that are not ``hallucinations''), again with guarantees. Our method can be applied to any LM API that supports sampling. Furthermore, we empirically demonstrate that we can achieve many desired coverage levels within a limited number of total samples when applying our method to multiple tasks in open-domain question answering, text summarization, and radiology report generation using different LM variants.", @@ -34463,7 +34463,7 @@ "Arpit Bansal", "Hong-Min Chu", "Avi Schwarzschild", - "Roni Sengupta", + "Soumyadip Sengupta", "Micah Goldblum", "Jonas Geiping", "Tom Goldstein" @@ -34598,13 +34598,13 @@ "id": 17738, "title": "Learning Grounded Action Abstractions from Language", "authors": [ - "Catherine Wong", + "Lionel Wong", "Jiayuan Mao", "Pratyusha Sharma", - "Zachary Siegel", + "Zachary S Siegel", "Jiahai Feng", "Noa Korneev", - "Joshua B Tenenbaum", + "Joshua B. Tenenbaum", "Jacob Andreas" ], "abstract": "Long-horizon planning is dauntingly hard -- it requires modeling relevant aspects of the environment and searching over large, complex action spaces. \\textit{Hierarchical planning} approaches make complex problems more tractable using temporal \\textit{action abstractions}, decomposing hard tasks into smaller abstract subproblems that can be solved modularly. However, actually learning useful action abstractions has long posed significant challenges without human expert knowledge. Here, we introduce a system that leverages background information in language to learn a \\textit{library of symbolic action abstractions and accompanying low-level policies} that can be composed to solve increasingly complex tasks. Our approach queries large language models (LLMs) as a prior for proposing useful symbolic action definitions, but integrates these proposals into a formal hierarchical planning system to ground and verify proposed actions. 
On two language-guided interactive planning domains (\\textit{Mini Minecraft} and the \\textit{ALFRED Household Tasks} benchmark), our approach far outperforms other baseline approaches that use LLMs in planning, enabling far more accurate planning and better generalization to more complex tasks.", "type": "Poster", @@ -34656,7 +34656,7 @@ }, { "id": 17735, - "title": "FairSeg: A Large-scale Medical Image Segmentation Dataset for Fairness Learning with Fair Error-Bound Scaling", + "title": "FairSeg: A Large-Scale Medical Image Segmentation Dataset for Fairness Learning Using Segment Anything Model with Fair Error-Bound Scaling", "authors": [ "Yu Tian", "Min Shi", @@ -34678,7 +34678,7 @@ "id": 17734, "title": "Masks, Signs, And Learning Rate Rewinding", "authors": [ - "Advait Gadhikar", + "Advait Harshal Gadhikar", "Rebekka Burkholz" ], "abstract": "Learning Rate Rewinding (LRR) has been established as a strong variant of Iterative Magnitude Pruning (IMP) to find lottery tickets in deep overparameterized neural networks. While both iterative pruning schemes couple structure and parameter learning, understanding how LRR excels in both aspects can bring us closer to the design of more flexible deep learning algorithms that can optimize diverse sets of sparse architectures. To this end, we conduct experiments that disentangle the effect of mask learning and parameter optimization and how both benefit from overparameterization. The ability of LRR to flip parameter signs early and stay robust to sign perturbations seems to make it not only more effective in mask identification but also in optimizing diverse sets of masks, including random ones. In support of this hypothesis, we prove in a simplified single hidden neuron setting that LRR succeeds in more cases than IMP, as it can escape initially problematic sign configurations.", "type": "Poster", @@ -34698,10 +34698,10 @@ "Abdulrahman Mahmoud", "Michal Kurek", "Simone Campanoni", - "Gu-Yeon Wei", + "David Brooks", "Stephen Chong", "Gu-Yeon Wei", - "Alexander Rush" + "Alexander M Rush" ], "abstract": "Maintaining legacy software requires many software and systems engineering hours. Assembly code programs, which demand low-level control over the computer machine state and have no variable names, are particularly difficult for humans to analyze. Existing conventional program translators guarantee correctness, but are hand-engineered for the source and target programming languages in question. Learned transpilation, i.e. automatic translation of code, offers an alternative to manual re-writing and engineering efforts. Automated symbolic program translation approaches guarantee correctness but struggle to scale to longer programs due to the exponentially large search space. Their rigid rule-based systems also limit their expressivity, so they can only reason about a reduced space of programs. Probabilistic neural language models (LMs) produce plausible outputs for every input, but do so at the cost of guaranteed correctness. In this work, we leverage the strengths of LMs and symbolic solvers in a neurosymbolic approach to learned transpilation for assembly code. Assembly code is an appropriate setting for a neurosymbolic approach, since assembly code can be divided into shorter non-branching basic blocks amenable to the use of symbolic methods. Guess & Sketch extracts alignment and confidence information from features of the LM then passes it to a symbolic solver to resolve semantic equivalence of the transpilation input and output.
We test Guess & Sketch on three different test sets of assembly transpilation tasks, varying in difficulty, and show that it successfully transpiles 57.6% more examples than GPT-4 and 39.6% more examples than an engineered transpiler. We also share a training and evaluation dataset for this task.", "type": "Poster", @@ -34732,7 +34732,7 @@ }, { "id": 17731, - "title": "NP-GL: Extending Power of Nature from Binary Problems to Real-World Graph Learning", + "title": "Extending Power of Nature from Binary to Real-Valued Graph Learning in Real World", "authors": [ "Chunshu Wu", "Ruibing Song", @@ -34868,7 +34868,7 @@ }, { "id": 17724, - "title": "Neural Probabilistic Protein-Protein Docking via a Differentiable Energy Model", + "title": "EBMDock: Neural Probabilistic Protein-Protein Docking via a Differentiable Energy Model", "authors": [ "Huaijin Wu", "Wei Liu", @@ -34909,8 +34909,8 @@ "authors": [ "Manju Garimella", "Denizhan Pak", - "Justin Wood", - "Samantha Wood" + "Justin Newell Wood", + "Samantha Marie Waters Wood" ], "abstract": "Newborn brains rapidly learn to solve challenging object recognition tasks, including segmenting objects from backgrounds and recognizing objects across novel backgrounds and viewpoints. Conversely, modern machine-learning (ML) algorithms are \"data hungry,\" requiring more training data than brains to reach similar performance levels. How do we close this learning gap between brains and machines? Here we introduce a new benchmark\u2014a Newborn Embodied Turing Test (NETT) for object segmentation\u2014in which newborn animals and machines are raised in the same environments and tested with the same tasks, permitting direct comparison of their learning abilities. First, we raised newborn chicks in controlled environments containing a single object rotating on a single background, then tested their ability to recognize that object across new backgrounds and viewpoints. Second, we performed \u201cdigital twin\u201d experiments in which we reared and tested artificial chicks in virtual environments that mimicked the rearing and testing conditions of the biological chicks. We inserted a variety of ML \u201cbrains\u201d into the artificial chicks and measured whether those algorithms learned common object recognition behavior as biological chicks. All biological chicks solved this one-shot object segmentation task, successfully learning background-invariant object representations that generalized across new backgrounds and viewpoints. In contrast, none of the artificial chicks solved this object segmentation task, instead learning background-dependent representations that failed to generalize across new backgrounds and viewpoints. This digital twin design exposes core limitations in current ML algorithms in achieving brain-like object perception. Our NETT is publicly available for comparing ML algorithms with newborn chicks. Ultimately, we anticipate that NETT benchmarks will allow researchers to build embodied AI systems that learn as efficiently and robustly as newborn brains.", "type": "Poster", @@ -34927,7 +34927,7 @@ "authors": [ "Andi Peng", "Ilia Sucholutsky", - "Belinda Li", + "Belinda Z. Li", "Theodore Sumers", "Thomas L. 
Griffiths", "Jacob Andreas", @@ -35022,10 +35022,10 @@ }, { "id": 17715, - "title": "Grounding Language Plans in Demonstrations Through Counter-Factual Perturbations", + "title": "Grounding Language Plans in Demonstrations Through Counterfactual Perturbations", "authors": [ "Yanwei Wang", - "Johnson (Tsun-Hsuan) Wang", + "Tsun-Hsuan Wang", "Jiayuan Mao", "Michael Hagenow", "Julie Shah" @@ -35071,7 +35071,7 @@ "Max Heitmann", "Halfdan Holm", "Charlie Griffin", - "Joar Skalse" + "Joar Max Viktor Skalse" ], "abstract": "Most algorithms in reinforcement learning (RL) require that the objective is formalised with a Markovian reward function. However, it is well-known that certain tasks cannot be expressed by means of an objective in the Markov rewards formalism, motivating the study of alternative objective-specification formalisms in RL such as Linear Temporal Logic and Multi-Objective Reinforcement Learning. To date, there has not yet been any thorough analysis of how these formalisms relate to each other in terms of their expressivity. We fill this gap in the existing literature by providing a comprehensive comparison of 17 salient objective-specification formalisms. We place these formalisms in a preorder based on their expressive power, and present this preorder as a Hasse diagram. We find a variety of limitations for the different formalisms, and argue that no formalism is both dominantly expressive and straightforward to optimise with current techniques. For example, we prove that each of Regularised RL, (Outer) Nonlinear Markov Rewards, Reward Machines, Linear Temporal Logic, and Limit Average Rewards can express a task that the others cannot. The significance of our results is twofold. First, we identify important expressivity limitations to consider when specifying objectives for policy optimization. Second, our results highlight the need for future research which adapts reward learning to work with a greater variety of formalisms, since many existing reward learning methods assume that the desired objective takes a Markovian form. Our work contributes towards a more cohesive understanding of the costs and benefits of different RL objective-specification formalisms.", "type": "Poster", @@ -35105,9 +35105,9 @@ "authors": [ "Andrei Lupu", "Chris Lu", - "Jarek Liesen", - "Robert Lange", - "Jakob Foerster" + "Jarek Luca Liesen", + "Robert Tjarko Lange", + "Jakob Nicolaus Foerster" ], "abstract": "Dataset distillation aims to condense large datasets into a small number of synthetic examples that can be used as drop-in replacements when training new models. It has applications to interpretability, neural architecture search, privacy, and continual learning. Despite strong successes in supervised domains, such methods have not yet been extended to reinforcement learning, where the lack of fixed dataset renders most distillation methods unusable.Filling the gap, we formalize $\\textit{behaviour distillation}$, a setting that aims to discover and then condense the information required for training an expert policy into a synthetic dataset of state-action pairs, $\\textit{without access to expert data}$. 
We then introduce Hallucinating Datasets with Evolution Strategies (HaDES), a method for behaviour distillation that can discover datasets of $\\textit{just four}$ state-action pairs which, under supervised learning, train agents to competitive performance levels in continuous control tasks.We show that these datasets generalize out of distribution to training policies with a wide range of architectures and hyperparameters. We also demonstrate application to a downstream task, namely training multi-task agents in a zero-shot fashion.Beyond behaviour distillation, HaDES provides significant improvements in neuroevolution for RL over previous approaches and achieves SoTA results on one standard supervised dataset distillation task. Finally, we show that visualizing the synthetic datasets can provide human-interpretable task insights.", "type": "Poster", @@ -35120,7 +35120,7 @@ }, { "id": 17710, - "title": "From Graphs to Hypergraphs: Hypergraph Projection and its Remediation", + "title": "From Graphs to Hypergraphs: Hypergraph Projection and its Reconstruction", "authors": [ "Yanbang Wang", "Jon Kleinberg" @@ -35136,7 +35136,7 @@ }, { "id": 17709, - "title": "Canonpipe: Data Debugging with Shapley Importance over Machine Learning Pipelines", + "title": "Data Debugging with Shapley Importance over Machine Learning Pipelines", "authors": [ "Bojan Karla\u0161", "David Dao", @@ -35293,9 +35293,9 @@ "id": 17701, "title": "Random Sparse Lifts: Construction, Analysis and Convergence of finite sparse networks", "authors": [ - "David Robin", + "David A. R. Robin", "Kevin Scaman", - "marc lelarge" + "Marc Lelarge" ], "abstract": "We present a framework to define a large class of neural networks for which, by construction, training by gradient flow provably reaches arbitrarily low loss when the number of parameters grows. Distinct from the fixed-space global optimality of non-convex optimization, this new form of convergence, and the techniques introduced to prove such convergence, pave the way for a usable deep learning convergence theory in the near future, without overparameterization assumptions relating the number of parameters and training samples. We define these architectures from a simple computation graph and a mechanism to lift it, thus increasing the number of parameters, generalizing the idea of increasing the widths of multi-layer perceptrons. We show that architectures similar to most common deep learning models are present in this class, obtained by sparsifying the weight tensors of usual architectures at initialization. Leveraging tools of algebraic topology and random graph theory, we use the computation graph\u2019s geometry to propagate properties guaranteeing convergence to any precision for these large sparse models.", "type": "Poster", @@ -35314,7 +35314,7 @@ "Jaehong Yoon", "DaHyun Kim", "Sung Ju Hwang", - "Chang Yoo" + "Chang D. Yoo" ], "abstract": "Neural Implicit Representation (NIR) has recently gained significant attention due to its remarkable ability to encode complex and high-dimensional data into representation space and easily reconstruct it through a trainable mapping function. However, NIR methods assume a one-to-one mapping between the target data and representation models regardless of data relevancy or similarity. This results in poor generalization over multiple complex data and limits their efficiency and scalability. 
Motivated by continual learning, this work investigates how to accumulate and transfer neural implicit representations for multiple complex video data over sequential encoding sessions. To overcome the limitation of NIR, we propose a novel method, Progressive Fourier Neural Representation (PFNR), that aims to find an adaptive and compact sub-module in Fourier space to encode videos in each training session. This sparsified neural encoding allows the neural network to hold free weights, enabling an improved adaptation for future videos. In addition, when learning a representation for a new video, PFNR transfers the representation of previous videos with frozen weights. This design allows the model to continuously accumulate high-quality neural representations for multiple videos while ensuring lossless decoding that perfectly preserves the learned representations for previous videos. We validate our PFNR method on the UVG8/17 video sequence benchmarks and achieve impressive performance gains over strong continual learning baselines.", "type": "Poster", @@ -35446,11 +35446,11 @@ }, { "id": 17691, - "title": "Octavius: Mitigating Task Interference in MLLMs via MoE", + "title": "Octavius: Mitigating Task Interference in MLLMs via LoRA-MoE", "authors": [ "Zeren Chen", - "ziqin wang", - "zhen wang", + "Ziqin Wang", + "Zhen Wang", "Huayang Liu", "Zhenfei Yin", "Si Liu", @@ -35667,7 +35667,7 @@ "id": 17679, "title": "Federated Wasserstein Distance", "authors": [ - "alain rakotomamonjy", + "Alain Rakotomamonjy", "Kimia Nadjahi", "Liva Ralaivola" ], @@ -35738,7 +35738,7 @@ "id": 17674, "title": "Conformal Prediction via Regression-as-Classification", "authors": [ - "Etash Guha", + "Etash Kumar Guha", "Shlok Natarajan", "Thomas M\u00f6llenhoff", "Mohammad Emtiyaz Khan", @@ -35755,7 +35755,7 @@ }, { "id": 17673, - "title": "Long-Short-Range Message-Passing: A Fragmentation-Based Framework to Capture Non-Local Atomistic Interactions", + "title": "Long-Short-Range Message-Passing: A Physics-Informed Framework to Capture Non-Local Interaction for Scalable Molecular Dynamics Simulation", "authors": [ "Yunyang Li", "Yusong Wang", @@ -35811,7 +35811,7 @@ }, { "id": 17671, - "title": "Selective Mixup Fine-Tuning for Optimizing Non-Decomposable Metrics", + "title": "Selective Mixup Fine-Tuning for Optimizing Non-Decomposable Objectives", "authors": [ "Shrinivas Ramasubramanian", "Harsh Rangwani", @@ -35837,9 +35837,9 @@ "Jason Yim", "Raman Samusevich", "Shahar Bracha", - "Tommi Jaakkola", + "Tommi S. Jaakkola", "Regina Barzilay", - "Ila Fiete" + "Ila R Fiete" ], "abstract": "The ability to engineer novel proteins with higher fitness for a desired property would be revolutionary for biotechnology and medicine. Modeling the combinatorially large space of sequences is infeasible; prior methods often constrain optimization to a small mutational radius, but this drastically limits the design space. Instead of heuristics, we propose smoothing the fitness landscape to facilitate protein optimization. First, we formulate protein fitness as a graph signal then use Tikunov regularization to smooth the fitness landscape. We find optimizing in this smoothed landscape leads to improved performance across multiple methods in the GFP and AAV benchmarks. Second, we achieve state-of-the-art results utilizing discrete energy-based models and MCMC in the smoothed landscape. 
Our method, called Gibbs sampling with Graph-based Smoothing (GGS), demonstrates a unique ability to achieve 2.5 fold fitness improvement (with in-silico evaluation) over its training set. GGS demonstrates potential to optimize proteins in the limited data regime. Code: https://github.com/kirjner/GGS", "type": "Poster", @@ -35856,7 +35856,7 @@ "id": 17669, "title": "CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images", "authors": [ - "olga fourkioti", + "Olga Fourkioti", "Matt De Vries", "Chris Bakal" ], @@ -35873,10 +35873,10 @@ "id": 17668, "title": "Neural SDF Flow for 3D Reconstruction of Dynamic Scenes", "authors": [ - "wei mao", + "Wei Mao", "Richard Hartley", "Mathieu Salzmann", - "Wei Mao" + "miaomiao Liu" ], "abstract": "In this paper, we tackle the problem of 3D reconstruction of dynamic scenes from multi-view videos. Previous works attempt to model the motion of 3D points in space, which either constrains them to handle a single articulated object or requires extra efforts to handle topology changes. By contrast, we propose to directly estimate the change of Signed Distance Function (SDF), namely SDF flow, of the dynamic scene. We show that the SDF flow captures the evolution of the scene surface and handles topology changes naturally. We further derive the mathematical relation between the SDF flow and the scene flow, which allows us to calculate the scene flow from the SDF flow analytically by solving linear equations. Our experiments on real-world multi-view video datasets show that our reconstructions are better than those of the state-of-the-art methods.", "type": "Poster", @@ -35912,7 +35912,7 @@ "Xiangyu Qi", "Ping He", "Yiming Li", - "Jiachen (Tianhao) Wang", + "Jiachen T. Wang", "Prateek Mittal" ], "abstract": "We present a novel defense, against backdoor attacks on Deep Neural Networks (DNNs), wherein adversaries covertly implant malicious behaviors (backdoors) into DNNs. Our defense falls within the category of post-development defenses that operate independently of how the model was generated. The proposed defense is built upon a novel reverse engineering approach that can directly extract **backdoor functionality** of a given backdoored model to a *backdoor expert* model. The approach is straightforward --- finetuning the backdoored model over a small set of intentionally mislabeled clean samples, such that it unlearns the normal functionality while still preserving the backdoor functionality, and thus resulting in a model~(dubbed a backdoor expert model) that can only recognize backdoor inputs. Based on the extracted backdoor expert model, we show the feasibility of devising highly accurate backdoor input detectors that filter out the backdoor inputs during model inference. Further augmented by an ensemble strategy with a finetuned auxiliary model, our defense, **BaDExpert** (**Ba**ckdoor Input **D**etection with Backdoor **Expert**), effectively mitigates 17 SOTA backdoor attacks while minimally impacting clean utility. 
The effectiveness of BaDExpert has been verified on multiple datasets (CIFAR10, GTSRB and ImageNet) across various model architectures (ResNet, VGG, MobileNetV2 and Vision Transformer).", @@ -35999,7 +35999,7 @@ "id": 17659, "title": "Memory-Assisted Sub-Prototype Mining for Universal Domain Adaptation", "authors": [ - "Yuxiang (YU-HSIANG) LAI", + "Yuxiang Lai", "Yi Zhou", "Xinghong Liu", "Tao Zhou" @@ -36034,7 +36034,7 @@ }, { "id": 17656, - "title": "Addressing Catastrophic Forgetting and Loss of Plasticity in Neural Networks", + "title": "Addressing Loss of Plasticity and Catastrophic Forgetting in Continual Learning", "authors": [ "Mohamed Elsayed", "A. Rupam Mahmood" @@ -36079,7 +36079,7 @@ "Guibin Zhang", "Kai Wang", "Xiaojiang Peng", - "yu zheng", + "Yu Zheng", "Yuxuan Liang", "Yang Wang" ], @@ -36116,7 +36116,7 @@ "authors": [ "Chaohua Shi", "Kexin Huang", - "Lu Gan", + "Lu GAN", "Hongqing Liu", "Mingrui Zhu", "Nannan Wang", @@ -36200,7 +36200,7 @@ "Nick Richardson", "Deniz Oktay", "Yaniv Ovadia", - "James Bowden", + "James C Bowden", "Ryan P Adams" ], "abstract": "Integrals with discontinuous integrands are ubiquitous, arising from discrete structure in applications like topology optimization, graphics, and computational geometry. These integrals are often part of a forward model in an inverse problem where it is necessary to reason backwards about the parameters, ideally using gradient-based optimization. Monte Carlo methods are widely used to estimate the value of integrals, but this results in a non-differentiable approximation that is amenable to neither conventional automatic differentiation nor reparameterization-based gradient methods. This significantly disrupts efforts to integrate machine learning methods in areas that exhibit these discontinuities: physical simulation and robotics, design, graphics, and computational geometry. Although bespoke domain-specific techniques can handle special cases, a general methodology to wield automatic differentiation in these discrete contexts is wanting. We introduce a differentiable variant of the simple Monte Carlo estimator which samples line segments rather than points from the domain. We justify our estimator analytically as conditional Monte Carlo and demonstrate the diverse functionality of the method as applied to image stylization, topology optimization, and computational geometry.", @@ -36240,7 +36240,7 @@ "Tao Zhong", "Huan Liu", "YUANHAO YU", - "Konstantinos Plataniotis", + "Konstantinos N Plataniotis", "Yang Wang" ], "abstract": "In this paper, we aim to adapt a model at test-time using a few unlabeled data to address distribution shifts. In this setting, extracting the domain knowledge from a limited amount of data is challenging. To improve such a process, it is crucial to utilize correlated information from pre-trained backbones and source domains. Previous studies fail to utilize recent foundation models with strong out-of-distribution generalization. Additionally, domain-centric designs are not flavored in their works. Furthermore, they employ the process of modelling source domains and the process of learning to adapt independently into disjoint training stages. In this work, we propose an approach on top of the pre-computed features of the foundation model. Specifically, we build a knowledge bank to learn the transferable knowledge from source domains. Conditioned on few-shot target data, we introduce a domain prompt generator to condense the knowledge bank into a domain-specific prompt. 
The domain prompt then directs the visual features towards a particular domain via a guidance module. Moreover, we propose a domain-aware contrastive loss and employ meta-learning to facilitate domain knowledge extraction. Extensive experiments are conducted to validate the domain knowledge extraction. The proposed method outperforms previous work significantly on 5 large-scale benchmarks including WILDS and DomainNet.", @@ -36260,7 +36260,7 @@ "Sebastian Cygert", "Valeriya Khan", "Tomasz Trzcinski", - "Bartosz Zieli\u0144ski", + "Bartosz Micha\u0142 Zieli\u0144ski", "Bart\u0142omiej Twardowski" ], "abstract": "Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know. A trend in this area is to use a mixture-of-expert technique, where different models work together to solve the task. However, the experts are usually trained all at once using whole task data, which makes them all prone to forgetting and increasing computational burden. To address this limitation, we introduce a novel approach named SEED. SEED selects only one, the most optimal expert for a considered task, and uses data from this task to fine-tune only this expert. For this purpose, each expert represents each class with a Gaussian distribution, and the optimal expert is selected based on the similarity of those distributions. Consequently, SEED increases diversity and heterogeneity within the experts while maintaining the high stability of this ensemble method. The extensive experiments demonstrate that SEED achieves state-of-the-art performance in exemplar-free settings across various scenarios, showing the potential of expert diversification through data in continual learning.", @@ -36297,17 +36297,17 @@ "id": 17644, "title": "Evaluating Representation Learning on the Protein Structure Universe", "authors": [ - "Arian Jamasb", + "Arian Rokkum Jamasb", "Alex Morehead", "Zuobai Zhang", - "Chaitanya Joshi", + "Chaitanya K. Joshi", "Kieran Didi", - "Simon Mathis", + "Simon V Mathis", "Charles Harris", "Jian Tang", "Jianlin Cheng", "Pietro Lio", - "Tom Blundell" + "Tom Leon Blundell" ], "abstract": "Protein structure representation learning is the foundation for promising applications in drug discovery, protein design, and protein function prediction. However, there remains a need for a robust, standardised benchmark to track the progress of new and established methods with greater granularity and relevance to downstream applications. In this work, we introduce a comprehensive and open benchmark suite for evaluating protein structure representation learning methods.We provide several pre-training methods, downstream tasks and pre-training corpora comprised of both experimental and predicted structures, offering a balanced challenge to representation learning algorithms. These tasks enable the systematic evaluation of the quality of the learned embeddings, the structural and functional relationships captured, and their usefulness in downstream tasks. We benchmark state-of-the-art protein-specific and generic geometric Graph Neural Networks and the extent to which they benefit from different types of pre-training. 
We find that pre-training consistently improves the performance of both rotation-invariant and equivariant models, and that equivariant models seem to benefit even more from pre-training compared to invariant models. We aim to establish a common ground for the machine learning and computational biology communities to collaborate, compare, and advance protein structure representation learning. By providing a standardised and rigorous evaluation platform, we expect to accelerate the development of novel methodologies and improve our understanding of protein structures and their functions. The codebase incorporates several engineering contributions which considerably reduces the barrier to entry for pre-training and working with large structure-based datasets. Our benchmark is available at: https://anonymous.4open.science/r/ProteinWorkshop-B8F5/", "type": "Poster", @@ -36323,7 +36323,7 @@ "title": "Classification with Conceptual Safeguards", "authors": [ "Hailey Joren", - "Charles Marx", + "Charles Thomas Marx", "Berk Ustun" ], "abstract": "Machine learning models are often used to automate routine tasks. In settings where mistakes are costly, we can trade off accuracy for coverage by abstaining from making a prediction on instances for which the model is uncertain. In this work, we present a new approach to selective classification in deep learning with concepts. Our approach constructs a concept bottleneck model where the front-end model can make predictions given soft concepts and leverage concept confirmation to improve coverage and performance under abstention. We develop techniques to propagate uncertainty and identify concepts for confirmation. We evaluate our approach on real-world and synthetic datasets, showing that it can improve coverage while maintaining performance across a range of tasks.", @@ -36343,7 +36343,7 @@ "Brihi Joshi", "Skyler Hallinan", "Ximing Lu", - "Liunian Li", + "Liunian Harold Li", "Aaron Chan", "Jack Hessel", "Yejin Choi", @@ -36362,11 +36362,11 @@ "id": 17623, "title": "Language Model Inversion", "authors": [ - "John X. Morris", + "John Xavier Morris", "Wenting Zhao", - "Justin Chiu", + "Justin T Chiu", "Vitaly Shmatikov", - "Alexander Rush" + "Alexander M Rush" ], "abstract": "Given a prompt, language models produce a distribution over all possible next tokens; when the prompt is unknown, can we use this distributional information to recover the prompt? We consider the problem of language model inversion and show that next-token probabilities contain a surprising amount of information about the preceding text. Often we can recover the text in cases where it is hidden from the user, motivating a method for recovering unknown prompts given only the model's current distribution output. We consider a variety of model access scenarios, and show how even without predictions for every token in the vocabulary we can recover the probability vector through search and reconstruction of the input. On LLAMA-7B, our inversion method reconstructs prompts with a BLEU of $59$ and token-level F1 of $77$ and recovers $23\\%$ of prompts exactly", "type": "Poster", @@ -36381,7 +36381,7 @@ "id": 17622, "title": "How Realistic Is Your Synthetic Data?
Constraining Deep Generative Models for Tabular Data", "authors": [ - "Mihaela Stoian", + "Mihaela C Stoian", "Salijona Dyrmishi", "Maxime Cordy", "Thomas Lukasiewicz", @@ -36439,7 +36439,7 @@ }, { "id": 17619, - "title": "Generalization of Deep ResNets in the Mean-Field Regime", + "title": "Generalization of Scaled Deep ResNets in the Mean-Field Regime", "authors": [ "Yihang Chen", "Fanghui Liu", @@ -36463,7 +36463,7 @@ "Jiatao Gu", "Shuangfei Zhai", "Yizhe Zhang", - "Joshua Susskind", + "Joshua M. Susskind", "Navdeep Jaitly" ], "abstract": "Diffusion models are the de-facto approach for generating high-quality images and videos, but learning high-dimensional models remains a formidable task due to computational and optimization challenges. Existing methods often resort to training cascaded models in pixel space, or using a downsampled latent space of a separately trained auto-encoder. In this paper, we introduce Matryoshka Diffusion (MDM), an end-to-end framework for high-resolution image and video synthesis. We propose a diffusion process that denoises inputs at multiple resolutions jointly and uses a NestedUNet architecture where features and parameters for small-scale inputs are nested within those of large scales. In addition, MDM enables a progressive training schedule from lower to higher resolutions, which leads to significant improvements in optimization for high-resolution generation. We demonstrate the effectiveness of our approach on various benchmarks, including class-conditioned image generation, high-resolution text-to-image, and text-to-video applications. Remarkably, we can train a single pixel-space model at resolutions of up to 1024x1024 pixels, demonstrating strong zero-shot generalization using the CC12M dataset, which contains only 12 million images.", @@ -36543,7 +36543,7 @@ "Wenhao Zhan", "Masatoshi Uehara", "Nathan Kallus", - "Jason Lee", + "Jason D. Lee", "Wen Sun" ], "abstract": "In this paper, we investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback where feedback is available in the form of preference between trajectory pairs rather than explicit rewards. Our proposed algorithm consists of two main steps: (1) estimate the implicit reward using Maximum Likelihood Estimation (MLE) with general function approximation from offline data and (2) solve a distributionally robust planning problem over a confidence set around the MLE. We consider the general reward setting where the reward can be defined over the whole trajectory and provide a novel guarantee that allows us to learn any target policy with a polynomial number of samples, as long as the target policy is covered by the offline data. This guarantee is the first of its kind with general function approximation. To measure the coverage of the target policy, we introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability coefficient. We also establish lower bounds that highlight the necessity of such concentrability and the difference from standard RL, where state-action-wise rewards are directly observed. 
We further extend and analyze our algorithm when the feedback is given over action pairs.", @@ -36561,7 +36561,7 @@ "authors": [ "Pengcheng Jiang", "Cao Xiao", - "Adam Cross", + "Adam Richard Cross", "Jimeng Sun" ], "abstract": "Clinical predictive models often rely on patients\u2019 electronic health records (EHR), but integrating medical knowledge to enhance predictions and decision-making is challenging. This is because personalized predictions require personalized knowledgegraphs (KGs), which are difficult to generate from patient EHR data. To address this, we propose GraphCare, an open-world framework that uses external KGs to improve EHR-based predictions. Our method extracts knowledge from large language models (LLMs) and external biomedical KGs to build patient-specific KGs, which are then used to train our proposed Bi-attention AugmenTed(BAT) graph neural network (GNN) for healthcare predictions. On two public datasets, MIMIC-III and MIMIC-IV, GraphCare surpasses baselines in four vital healthcare prediction tasks: mortality, readmission, length of stay (LOS), and drug recommendation. On MIMIC-III, it boosts AUROC by 17.6% and 6.6% for mortality and readmission, and F1-score by 7.9% and 10.8% for LOS and drug recommendation, respectively. Notably, GraphCare demonstrates a substantial edge in scenarios with limited data availability. Our findings highlight the potential of using external KGs in healthcare prediction tasks and demonstrate the promise of GraphCare in generating personalized KGs for promoting personalized medicine.", @@ -36633,8 +36633,8 @@ "title": "Compositional Preference Models for Aligning LMs", "authors": [ "Dongyoung Go", - "Tomek Korbak", - "Germ\u00e0n Kruszewski", + "Tomasz Korbak", + "Germ\u00e1n Kruszewski", "Jos Rozen", "Marc Dymetman" ], @@ -36862,7 +36862,7 @@ "authors": [ "Yameng Peng", "Andy Song", - "Haytham Fayek", + "Haytham M. Fayek", "Vic Ciesielski", "Xiaojun Chang" ], @@ -36881,14 +36881,14 @@ "authors": [ "Mrinank Sharma", "Meg Tong", - "Tomek Korbak", + "Tomasz Korbak", "David Duvenaud", "Amanda Askell", - "Sam Bowman", + "Samuel R. Bowman", "Esin DURMUS", "Zac Hatfield-Dodds", - "Scott Johnston", - "Shauna Kravec", + "Scott R Johnston", + "Shauna M Kravec", "Timothy Maxwell", "Sam McCandlish", "Kamal Ndousse", @@ -36911,7 +36911,7 @@ "id": 17592, "title": "Learning invariant representations of time-homogeneous stochastic dynamical systems", "authors": [ - "Vladimir Kostic", + "Vladimir R Kostic", "Pietro Novelli", "Riccardo Grazzi", "Karim Lounici", @@ -36934,7 +36934,7 @@ "Itay Evron", "Nir Weinberger", "Daniel Soudry", - "Paul Hand" + "PAul HAnd" ], "abstract": "In continual learning, catastrophic forgetting is affected by multiple aspects of the tasks. Previous works have analyzed separately how forgetting is affected by either task similarity or overparameterization. In contrast, our paper examines how task similarity and overparameterization jointly affect forgetting in an analyzable model. Specifically, we focus on two-task continual linear regression, where the second task is a random orthogonal transformation of an arbitrary first task (an abstraction of random permutation tasks). We derive an exact analytical expression for the expected forgetting \u2014 and uncover a nuanced pattern. In highly overparameterized models, intermediate task similarity causes the most forgetting. However, near the interpolation threshold, forgetting decreases monotonically with the expected task similarity. 
We validate our findings with linear regression on synthetic data, and with neural networks on established permutation task benchmarks.", "type": "Poster", @@ -36951,7 +36951,7 @@ "authors": [ "Nithin Chalapathi", "Yiheng Du", - "Aditi Krishnapriyan" + "Aditi S. Krishnapriyan" ], "abstract": "Imposing known physical constraints, such as conservation laws, during neural network training introduces an inductive bias that can improve accuracy, reliability, convergence, and data efficiency for modeling physical dynamics. While such constraints can be softly imposed via loss function penalties, recent advancements in differentiable physics and optimization improve performance by incorporating PDE-constrained optimization as individual layers in neural networks. This enables a stricter adherence to physical constraints. However, imposing hard constraints significantly increases computational and memory costs, especially for complex dynamical systems. This is because it requires solving an optimization problem over a large number of points in a mesh, representing spatial and temporal discretizations, which greatly increases the complexity of the constraint. To address this challenge, we develop a scalable approach to enforce hard physical constraints using Mixture-of-Experts (MoE), which can be used with any neural network architecture. Our approach imposes the constraint over smaller decomposed domains, each of which is solved by an ``expert'' through differentiable optimization. During training, each expert independently performs a localized backpropagation step by leveraging the implicit function theorem; the independence of each expert allows for parallelization across multiple GPUs. Compared to standard differentiable optimization, our scalable approach achieves greater accuracy in the neural PDE solver setting for predicting the dynamics of challenging non-linear systems. We also improve training stability and require significantly less computation time during both training and inference stages.", "type": "Poster", @@ -36994,11 +36994,11 @@ "Max Schwarzer", "Harsh Agrawal", "Bogdan Mazoure", - "Katherine Metcalf", + "Rin Metcalf", "Walter Talbott", "Natalie Mackraz", "R Devon Hjelm", - "Alexander Toshev" + "Alexander T Toshev" ], "abstract": "We show that large language models (LLMs) can be adapted to be generalizable policies for embodied visual tasks. Our approach, called Large LAnguage model Reinforcement Learning Policy (LLaRP), adapts a pre-trained frozen LLM to take as input text instructions and visual egocentric observations and output actions directly in the environment. Using reinforcement learning, we train LLaRP to see and act solely through environmental interactions. We show that LLaRP is robust to complex paraphrasings of task instructions and can generalize to new tasks that require novel optimal behavior. In particular, on 1,000 unseen tasks it achieves 42% success rate, 1.7x the success rate of other common learned baselines or zero-shot applications of LLMs. 
Finally, to aid the community in studying language conditioned, massively multi-task, embodied AI problems we release a novel benchmark, Language Rearrangement, consisting of 150,000 training and 1,000 testing tasks for language-conditioned rearrangement.", "type": "Poster", @@ -37047,7 +37047,7 @@ }, { "id": 17586, - "title": "Augmenting transformers with recursively composed multi-grained representations", + "title": "Augmenting Transformers with Recursively Composed Multi-grained Representations", "authors": [ "Xiang Hu", "Qingyang Zhu", @@ -37101,7 +37101,7 @@ }, { "id": 17582, - "title": "Boosting Selective Rationalization with Shortcuts Discovery", + "title": "Towards Faithful Explanations: Boosting Rationalization with Shortcuts Discovery", "authors": [ "Linan Yue", "Qi Liu", @@ -37179,7 +37179,7 @@ "id": 17578, "title": "BEND: Benchmarking DNA Language Models on Biologically Meaningful Tasks", "authors": [ - "Frederikke Marin", + "Frederikke Isa Marin", "Felix Teufel", "Marc Horlacher", "Dennis Madsen", @@ -37203,9 +37203,9 @@ "title": "Self-supervised Pocket Pretraining via Protein Fragment-Surroundings Alignment", "authors": [ "Bowen Gao", - "Yinjun JIA", - "Yuanle Mo", - "yuyan ni", + "Yinjun Jia", + "YuanLe Mo", + "Yuyan Ni", "Wei-Ying Ma", "Zhi-Ming Ma", "Yanyan Lan" @@ -37248,7 +37248,7 @@ "Beomsu Kim", "Gihyun Kwon", "Kwanyoung Kim", - "Jong Ye" + "Jong Chul Ye" ], "abstract": "Diffusion models are a powerful class of generative models which simulate stochastic differential equations (SDEs) to generate data from noise. Although diffusion models have achieved remarkable progress in recent years, they have limitations in the unpaired image-to-image translation tasks due to the Gaussian prior assumption. Schr\u00f6dinger Bridge (SB), which learns an SDE to translate between two arbitrary distributions, have risen as an attractive solution to this problem. However, none of SB models so far have been successful at unpaired translation between high-resolution images. In this work, we propose the Unpaired Neural Schr\u00f6dinger Bridge (UNSB), which expresses SB problem as a sequence of adversarial learning problems. This allows us to incorporate advanced discriminators and regularization to learn a SB between unpaired data. We demonstrate that UNSB is scalable and successfully solves various unpaired image-to-image translation tasks.", "type": "Poster", @@ -37344,7 +37344,7 @@ "authors": [ "T Mitchell Roddenberry", "Vishwanath Saragadam", - "Maarten V de Hoop", + "Maarten V. de Hoop", "Richard Baraniuk" ], "abstract": "Implicit neural representations (INRs) have arisen as useful methods for representing signals on Euclidean domains. By parameterizing an image as a multilayer perceptron (MLP) on Euclidean space, INRs effectively represent signals in a way that couples spatial and spectral features of the signal that is not obvious in the usual discrete representation, paving the way for continuous signal processing and machine learning approaches that were not previously possible. Although INRs using sinusoidal activation functions have been studied in terms of Fourier theory, recent works have shown the advantage of using wavelets instead of sinusoids as activation functions, due to their ability to simultaneously localize in both frequency and space. In this work, we approach such INRs and demonstrate how they resolve high-frequency features of signals from coarse approximations done in the first layer of the MLP. 
This leads to multiple prescriptions for the design of INR architectures, including the use of complex wavelets, decoupling of low and band-pass approximations, and initialization schemes based on the singularities of the desired signal.", @@ -37444,7 +37444,7 @@ "id": 17561, "title": "LLMs Meet VLMs: Boost Open Vocabulary Object Detection with Fine-grained Descriptors", "authors": [ - "Sheng JIn", + "Sheng Jin", "Xueying Jiang", "Jiaxing Huang", "Lewei Lu", @@ -37463,7 +37463,7 @@ "id": 17560, "title": "Beyond Spatio-Temporal Representations: Evolving Fourier Transform for Temporal Graphs", "authors": [ - "Anson Simon Bastos", + "Anson Bastos", "Kuldeep Singh", "Abhishek Nadgeri", "Manish Singh", @@ -37501,7 +37501,7 @@ "authors": [ "Ilan Price", "Nicholas Daultry Ball", - "Adam Jones", + "Adam Christopher Jones", "Samuel Chun Hei Lam", "Jared Tanner" ], @@ -37569,11 +37569,11 @@ "id": 17554, "title": "Towards Characterizing Domain Counterfactuals for Invertible Latent Causal Models", "authors": [ - "Sean Kulinski", "Zeyu Zhou", "Ruqi Bai", + "Sean Kulinski", "Murat Kocaoglu", - "David Inouye" + "David I. Inouye" ], "abstract": "Answering counterfactual queries has many important applications such as knowledge discovery and explainability, but is challenging when causal variables are unobserved and we only see a projection onto an observation space, for instance, image pixels. One approach is to recover the latent Structural Causal Model (SCM), but this typically needs unrealistic assumptions, such as linearity of the causal mechanisms. Another approach is to use naïve ML approximations, such as generative models, to generate counterfactual samples; however, these lack guarantees of accuracy. In this work, we strive to strike a balance between practicality and theoretical guarantees by focusing on a specific type of causal query called *domain counterfactuals*, which hypothesizes what a sample would have looked like if it had been generated in a different domain (or environment). Concretely, by only assuming invertibility, sparse domain interventions and access to observational data from different domains, we aim to improve domain counterfactual estimation both theoretically and practically with less restrictive assumptions. We define *domain counterfactually equivalent* models and prove necessary and sufficient properties for equivalent models that provide a tight characterization of the domain counterfactual equivalence classes. Building upon this result, we prove that every equivalence class contains a model where all intervened variables are at the end when topologically sorted by the causal DAG, i.e., all non-intervened variables have non-intervened ancestors. This surprising result suggests that a model design that only allows intervention in the last $k$ latent variables may improve model estimation for counterfactuals. We then test this model design on extensive simulated and image-based experiments which show the sparse canonical model indeed improves counterfactual estimation over baseline non-sparse models.", "type": "Poster", @@ -37591,7 +37591,7 @@ "Kevin Yang", "Dan Klein", "Asli Celikyilmaz", - "Nanyun (Violet) Peng", + "Nanyun Peng", "Yuandong Tian" ], "abstract": "We propose Reinforcement Learning from Contrastive Distillation (RLCD), a method for aligning language models to follow principles expressed in natural language (e.g., to be more harmless) without using human feedback. 
RLCD creates preference pairs from two contrasting model outputs, one using a positive prompt designed to encourage following the given principles, and one using a negative prompt designed to encourage violating them. Using two different prompts causes model outputs to be more differentiated on average, resulting in cleaner preference labels in the absence of human annotations. We then use the preference pairs to train a preference model, which is in turn used to improve a base unaligned language model via reinforcement learning. Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context distillation (Huang et al., 2022) baselines across three diverse alignment tasks\u2014harmlessness, helpfulness, and story outline generation\u2014and when using both 7B and 30B model scales for simulating preference data", @@ -37608,7 +37608,7 @@ "title": "Faster Sampling from Log-Concave Densities over Polytopes via Efficient Linear Solvers", "authors": [ "Oren Mangoubi", - "Nisheeth Vishnoi" + "Nisheeth K. Vishnoi" ], "abstract": "We consider the problem of sampling from a logconcave distribution $\\pi(\\theta) \\propto e^{-f(\\theta)}$ constrained to a polytope $K:=${$\\theta \\in \\mathbb{R}^d: A\\theta \\leq b$}, where $A\\in \\mathbb{R}^{m\\times d}$ and $b \\in \\mathbb{R}^m$. The fastest-known algorithm for the setting when $f$ is $O(1)$-Lipschitz or $O(1)$-smooth runs in roughly $O(md \\times md^{\\omega -1})$ arithmetic operations, where the $md^{\\omega -1}$ term arises because each Markov chain step requires computing a matrix inversion and determinant ($\\omega \\approx 2.37$ is the matrix multiplication constant). We present a nearly-optimal implementation of this Markov chain with per-step complexity that is roughly the number of non-zero entries of $A$ while the number of Markov chain steps remains the same. The key technical ingredients are 1) to show that the matrices that arise in this Dikin walk change slowly, 2) to deploy efficient linear solvers which can leverage this slow change to speed up matrix inversion by using information computed in previous steps, and 3) to speed up the computation of the determinantal term in the Metropolis filter step via a randomized Taylor series-based estimator. This result directly improves the runtime for applications that involve sampling from Gibbs distributions constrained to polytopes that arise in Bayesian statistics and private optimization.", "type": "Poster", @@ -37637,7 +37637,7 @@ }, { "id": 17547, - "title": "Understanding AI Cognition: A Neural Module for Inference Inspired by Human Memory Mechanisms", + "title": "A Framework for Inference Inspired by Human Memory Mechanisms", "authors": [ "Xiangyu Zeng", "Jie Lin", @@ -37704,7 +37704,7 @@ "Linghan Xu", "Jun Fang", "Qingming Tang", - "Yingnian Wu", + "Ying Nian Wu", "Joseph Tighe", "Yifan Xing" ], @@ -37723,7 +37723,7 @@ "authors": [ "Yue Deng", "Wenxuan Zhang", - "Sinno Pan", + "Sinno Jialin Pan", "Lidong Bing" ], "abstract": "While large language models (LLMs) exhibit remarkable capabilities across a wide range of tasks, they pose potential safety concerns, such as the ``jailbreak'' problem. Although several preventive measures have been developed to mitigate the potential risks associated with LLMs, they have primarily focused on English data. In this study, we reveal the presence of multilingual jailbreak challenges within LLMs and consider two potential risky scenarios: unintentional and intentional. 
The unintentional scenario involves users querying LLMs using non-English prompts and inadvertently bypassing the safety mechanisms, while the intentional scenario entails malicious users combining jailbreak instructions with multilingual prompts to attack LLMs deliberately. The experimental results reveal that in the unintentional scenario, the rate of unsafe content increases as the availability of languages decreases. Specifically, low-resource languages exhibit three times the likelihood of encountering harmful content compared to high-resource languages, with both ChatGPT and GPT-4. In the intentional scenario, multilingual prompts can exacerbate the negative impact of jailbreak instructions, with astonishingly high rates of unsafe output: 80.92\\% for ChatGPT and 40.71\\% for GPT-4. Finally, we propose a novel \\textsc{Self-Defense} framework that addresses the multilingual jailbreak challenges via automatically generating multilingual safety training data for fine-tuning. Experiment results demonstrate its effectiveness with notable reduction in unsafe rate.", @@ -37758,11 +37758,11 @@ "id": 17541, "title": "Skip-Attention: Improving Vision Transformers by Paying Less Attention", "authors": [ - "Shashank Venkataramanan", + "Shashanka Venkataramanan", "Amir Ghodrati", - "Yuki Asano", + "Yuki M Asano", "Fatih Porikli", - "Amirhossein Habibian" + "Amir Habibian" ], "abstract": "This work aims to improve the efficiency of vision transformers (ViTs). While ViTs use computationally expensive self-attention operations in every layer, we identify that these operations are highly correlated across layers -- a key redundancy that causes unnecessary computations. Based on this observation, we propose SkipAT a method to reuse self-attention computation from preceding layers to approximate attention at one or more subsequent layers. To ensure that reusing self-attention blocks across layers does not degrade the performance, we introduce a simple parametric function, which outperforms the baseline transformer's performance while running computationally faster. We show that SkipAT is agnostic to transformer architecture and is effective in image classification, semantic segmentation on ADE20K, image denoising on SIDD, and video denoising on DAVIS. We achieve improved throughput at the same-or-higher accuracy levels in all these tasks.", "type": "Poster", @@ -37860,7 +37860,7 @@ "Jingfeng Wu", "Difan Zou", "Zixiang Chen", - "vladimir braverman", + "Vladimir Braverman", "Quanquan Gu", "Peter Bartlett" ], @@ -37916,7 +37916,7 @@ "title": "SliceGPT: Compress Large Language Models by Deleting Rows and Columns", "authors": [ "Saleh Ashkboos", - "Maximilian Croci", + "Maximilian L. Croci", "Marcelo Gennari do Nascimento", "Torsten Hoefler", "James Hensman" @@ -37973,11 +37973,11 @@ "id": 17528, "title": "Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks", "authors": [ - "Hung Quang Nguyen", + "Nguyen Hung-Quang", "Yingjie Lao", "Tung Pham", "Kok-Seng Wong", - "Khoa Doan" + "Khoa D Doan" ], "abstract": "Recent works have shown that deep neural networks are vulnerable to adversarial examples that find samples close to the original image but can make the model misclassify. Even with access only to the model's output, an attacker can employ black-box attacks to generate such adversarial examples. 
In this work, we propose a simple and lightweight defense against black-box attacks by adding random noise to hidden features at intermediate layers of the model at inference time. Our theoretical analysis confirms that this method effectively enhances the model's resilience against both score-based and decision-based black-box attacks. Importantly, our defense does not necessitate adversarial training and has minimal impact on accuracy, rendering it applicable to any pre-trained model. Our analysis also reveals the significance of selectively adding noise to different parts of the model based on the gradient of the adversarial objective function, which can be varied during the attack. We demonstrate the robustness of our defense against multiple black-box attacks through extensive empirical experiments involving diverse models with various architectures.", "type": "Poster", @@ -37992,10 +37992,10 @@ "id": 17527, "title": "Learning Multi-Agent Communication with Contrastive Learning", "authors": [ - "Yat Long (Richie) Lo", + "Yat Long Lo", "Biswa Sengupta", - "Jakob Foerster", - "Mikhail Noukhovitch" + "Jakob Nicolaus Foerster", + "Michael Noukhovitch" ], "abstract": "Communication is a powerful tool for coordination in multi-agent RL. But inducing an effective, common language is a difficult challenge, particularly in the decentralized setting. In this work, we introduce an alternative perspective where communicative messages sent between agents are considered as different incomplete views of the environment state. By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning to maximize the mutual information between messages of a given trajectory. In communication-essential environments, our method outperforms previous work in both performance and learning speed. Using qualitative metrics and representation probing, we show that our method induces more symmetric communication and captures global state information from the environment. Overall, we show the power of contrastive learning and the importance of leveraging messages as encodings for effective communication.", "type": "Poster", @@ -38024,14 +38024,14 @@ }, { "id": 17525, - "title": "Towards Robust and Efficient Cloud-Edge Model Adaptation via Selective Entropy Distillation", + "title": "Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation", "authors": [ "Yaofo Chen", "Shuaicheng Niu", "Shoukai Xu", "Hengjie Song", - "Mingkui Tan", - "Yaowei Wang" + "Yaowei Wang", + "Mingkui Tan" ], "abstract": "The conventional deep learning paradigm often involves training a deep model on a server and then deploying the model or its distilled ones to resource-limited edge devices. Usually, the models shall remain fixed once deployed (at least for some period) due to the potential high cost of model adaptation for both the server and edge sides. However, in many real-world scenarios, the test environments may change dynamically (known as distribution shifts), which often results in degraded performance. Thus, one has to adapt the edge models promptly to attain promising performance. Moreover, with the increasing data collected at the edge, this paradigm also fails to further adapt the cloud model for better performance. 
To address these, we encounter two primary challenges: 1) the edge model has limited computation power and may only support forward propagation; 2) the data transmission budget between cloud and edge devices is limited in latency-sensitive scenarios. In this paper, we establish a Cloud-Edge Model Adaptation (CEMA) paradigm in which the edge models only need to perform forward propagation and the edge models can be adapted online. In our CEMA, to reduce the communication burden, we devise two criteria to exclude unnecessary samples from uploading to the cloud, i.e., dynamic unreliable and low-informative sample exclusion. Based on the uploaded samples, we update and distribute the affine parameters of normalization layers by distilling from the stronger foundation model to the edge model with a sample replay strategy. Extensive experimental results on ImageNet-C and ImageNet-R verify the effectiveness of our CEMA.", "type": "Poster", @@ -38044,10 +38044,10 @@ }, { "id": 17523, - "title": "Universal Graph Random Features", + "title": "General Graph Random Features", "authors": [ "Isaac Reid", - "Krzysztof Choromanski", + "Krzysztof Marcin Choromanski", "Eli Berger", "Adrian Weller" ], @@ -38062,7 +38062,7 @@ }, { "id": 17522, - "title": "Alice Benchmarks: Connecting Real World Object Re-Identification with the Synthetic", + "title": "Alice Benchmarks: Connecting Real World Re-Identification with the Synthetic", "authors": [ "Xiaoxiao Sun", "Yue Yao", @@ -38083,8 +38083,8 @@ "id": 17520, "title": "ModernTCN: A Modern Pure Convolution Structure for General Time Series Analysis", "authors": [ - "DongHao Luo", - "Xue Wang" + "Luo donghao", + "wang xue" ], "abstract": "Recently, Transformer-based and MLP-based models have emerged rapidly and won dominance in time series analysis. In contrast, convolution is losing steam in time series tasks nowadays for inferior performance. This paper studies the open question of how to better use convolution in time series analysis and makes efforts to bring convolution back to the arena of time series analysis. To this end, we modernize the traditional TCN and conduct time series related modifications to make it more suitable for time series tasks. As the outcome, we propose ModernTCN and successfully solve this open question through a seldom-explored way in time series community. As a pure convolution structure, ModernTCN still achieves the consistent state-of-the-art performance on five mainstream time series analysis tasks (long-term and short-term forecasting, imputation, classification and anomaly detection) while maintaining the efficiency advantage of convolution-based models, therefore providing a better balance of efficiency and performance than state-of-the-art Transformer-based and MLP-based models. Our study further reveals that, compared with previous convolution-based models, our ModernTCN has much larger effective receptive fields (ERFs), therefore can better unleash the potential of convolution in time series analysis. 
The code will be publicly available.", "type": "Spotlight Poster", @@ -38116,7 +38116,7 @@ }, { "id": 17519, - "title": "Decision Transformer is a Robust Contender for Offline Reinforcement Learning", + "title": "When should we prefer Decision Transformers for Offline Reinforcement Learning?", "authors": [ "Prajjwal Bhargava", "Rohan Chitnis", @@ -38140,11 +38140,11 @@ "Zhijing Jin", "Jiarui Liu", "Zhiheng LYU", - "spencer poff", + "Spencer Poff", "Mrinmaya Sachan", "Rada Mihalcea", - "Mona Diab", - "Bernhard Schoelkopf" + "Mona T. Diab", + "Bernhard Sch\u00f6lkopf" ], "abstract": "Causal inference is one of the hallmarks of human intelligence. While the field of CausalNLP has attracted much interest in the recent years, existing causal inference datasets in NLP primarily rely on discovering causality from empirical knowledge (e.g. commonsense knowledge). In this work, we propose the first benchmark dataset to test the pure causal inference skills of large language models (LLMs). Specifically, we formulate a novel task Corr2Cause, which takes a set of correlational statements and determines the causal relationship between the variables. We curate a large-scale dataset of more than 400K samples, on which we evaluate seventeen existing LLMs. Through our experiments, we identify a key shortcoming of LLMs in terms of their causal inference skills, and show that these models achieve almost close to random performance on the task. This shortcoming is somewhat mitigated when we try to re-purpose LLMs for this skill via finetuning, but we find that these models still fail to generalize \u2013 they can only perform causal inference in in-distribution settings when variable names and textual expressions used in the queries are similar to those in the training set, but fail in out-of-distribution settings generated by perturbing these queries. Corr2Cause is a challenging task for LLMs, and would be helpful in guiding future research on improving LLMs\u2019 pure reasoning ability and generalizability.", "type": "Poster", @@ -38159,7 +38159,7 @@ }, { "id": 17517, - "title": "LLM4QPE: Unsupervised Pretraining of Quantum Property Estimation and A Benchmark", + "title": "Towards LLM4QPE: Unsupervised Pretraining of Quantum Property Estimation and A Benchmark", "authors": [ "Yehui Tang", "Hao Xiong", @@ -38200,7 +38200,7 @@ "authors": [ "Tanishq Kumar", "Blake Bordelon", - "Samuel Gershman", + "Samuel J. Gershman", "Cengiz Pehlevan" ], "abstract": "We propose that the grokking phenomenon, where the train loss of a neural network decreases much earlier than its test loss, can arise due to a neural network transitioning from lazy training dynamics to a rich, feature learning regime. To illustrate this mechanism, we study the simple setting of vanilla gradient descent on a polynomial regression problem with a two layer neural network which exhibits grokking without regularization in a way that cannot be explained by existing theories. We identify sufficient statistics for the test loss of such a network, and tracking these over training reveals that grokking arises in this setting when the network first attempts to fit a kernel regression solution with its initial features, followed by late-time feature learning where a generalizing solution is identified after train loss is already low. 
We find that the key determinants of grokking are the rate of feature learning---which can be controlled precisely by parameters that scale the network output---and the alignment of the initial features with the target function $y(x)$. We argue this delayed generalization arises when (1) the top eigenvectors of the initial neural tangent kernel and the task labels $y(x)$ are misaligned, but (2) the dataset size is large enough so that it is possible for the network to generalize eventually, but not so large that train loss perfectly tracks test loss at all epochs, and (3) the network begins training in the lazy regime so does not learn features immediately. We conclude with evidence that this transition from lazy (linear model) to rich training (feature learning) can control grokking in more general settings, like on MNIST, one-layer Transformers, and student-teacher networks.", @@ -38218,7 +38218,7 @@ "authors": [ "Florian Gr\u00f6tschla", "Jo\u00ebl Mathys", - "R\u00f3bert Veres", + "Robert Veres", "Roger Wattenhofer" ], "abstract": "Graph Visualization, also known as Graph Drawing, aims to find geometric embeddings of graphs that optimize certain criteria. Stress is a widely used metric; stress is minimized when every pair of nodes is positioned at their shortest path distance. However, stress optimization presents computational challenges due to its inherent complexity and is usually solved using heuristics in practice. We introduce a scalable Graph Neural Network (GNN) based Graph Drawing framework with sub-quadratic runtime that can learn to optimize stress. Inspired by classical stress optimization techniques and force-directed layout algorithms, we create a coarsening hierarchy for the input graph. Beginning at the coarsest level, we iteratively refine and un-coarsen the layout, until we generate an embedding for the original graph. To enhance information propagation within the network, we propose a novel positional rewiring technique based on intermediate node positions. 
Our empirical evaluation demonstrates that the framework achieves state-of-the-art performance while remaining scalable.", @@ -38234,7 +38234,7 @@ "id": 17513, "title": "GRAPH-CONSTRAINED DIFFUSION FOR END-TO-END PATH PLANNING", "authors": [ - "DINGYUAN SHI", + "Dingyuan Shi", "Yongxin Tong", "Zimu Zhou", "Ke Xu", @@ -38278,7 +38278,7 @@ "id": 17511, "title": "Are Models Biased on Text without Gender-related Language?", "authors": [ - "Catarina Bel\u00e9m", + "Catarina G Bel\u00e9m", "Preethi Seshadri", "Yasaman Razeghi", "Sameer Singh" @@ -38296,10 +38296,10 @@ "id": 17510, "title": "MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training", "authors": [ - "Yizhi Li", + "Yizhi LI", "Ruibin Yuan", "Ge Zhang", - "Yinghao MA", + "Yinghao Ma", "Xingran Chen", "Hanzhi Yin", "Chenghao Xiao", @@ -38313,7 +38313,7 @@ "Gus Xia", "Yemin Shi", "Wenhao Huang", - "zili wang", + "Zili Wang", "Yike Guo", "Jie Fu" ], @@ -38330,7 +38330,7 @@ }, { "id": 17509, - "title": "SuRe: Improving Open-domain Question Answering of LLMs via Summarized Retrieval", + "title": "SuRe: Summarizing Retrievals using Answer Candidates for Open-domain QA of LLMs", "authors": [ "Jaehyung Kim", "Jaehyun Nam", @@ -38391,7 +38391,7 @@ "authors": [ "Junoh Lee", "Hyunjun Jung", - "Jinhwi Park", + "Jin-Hwi Park", "Inhwan Bae", "Hae-Gon Jeon" ], @@ -38430,9 +38430,9 @@ "id": 17503, "title": "Remote Sensing Vision-Language Foundation Models without Annotations via Ground Remote Alignment", "authors": [ - "Utkarsh Kumar Mall", + "Utkarsh Mall", "Cheng Perng Phoo", - "Meilin Liu", + "Meilin Kelsey Liu", "Carl Vondrick", "Bharath Hariharan", "Kavita Bala" @@ -38524,7 +38524,7 @@ "title": "BarLeRIa: An Efficient Tuning Framework for Referring Image Segmentation", "authors": [ "Yaoming Wang", - "Li Jin", + "Jin Li", "XIAOPENG ZHANG", "Bowen Shi", "Chenglin Li", @@ -38547,7 +38547,7 @@ "authors": [ "Diyang Li", "Charles Ling", - "Zhiqiang Xu", + "zhiqiang xu", "Huan Xiong", "Bin Gu" ], @@ -38564,9 +38564,9 @@ "id": 17495, "title": "STARC: A General Framework For Quantifying Differences Between Reward Functions", "authors": [ - "Joar Skalse", + "Joar Max Viktor Skalse", "Lucy Farnik", - "Sumeet Motwani", + "Sumeet Ramesh Motwani", "Erik Jenner", "Adam Gleave", "Alessandro Abate" @@ -38605,8 +38605,8 @@ "id": 17493, "title": "MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo", "authors": [ - "chenjie cao", - "xinlin ren", + "Chenjie Cao", + "Xinlin Ren", "Yanwei Fu" ], "abstract": "Recent advancements in learning-based Multi-View Stereo (MVS) methods have prominently featured transformer-based models with attention mechanisms. However, existing approaches have not thoroughly investigated the profound influence of transformers on different MVS modules, resulting in limited depth estimation capabilities. In this paper, we introduce MVSFormer++, a method that prudently maximizes the inherent characteristics of attention to enhance various components of the MVS pipeline. Formally, our approach involves infusing cross-view information into the pre-trained DINOv2 model to facilitate MVS learning. Furthermore, we employ different attention mechanisms for the feature encoder and cost volume regularization, focusing on feature and spatial aggregations respectively. 
Additionally, we uncover that some design details would substantially impact the performance of transformer modules in MVS, including normalized 3D positional encoding, adaptive attention scaling, and the position of layer normalization. Comprehensive experiments on DTU, Tanks-and-Temples, BlendedMVS, and ETH3D validate the effectiveness of the proposed method. Notably, MVSFormer++ achieves state-of-the-art performance on the challenging DTU and Tanks-and-Temples benchmarks. Codes and models are available at https://github.com/maybeLx/MVSFormerPlusPlus.", @@ -38704,7 +38704,7 @@ "Benjamin Eysenbach", "Tuomas Sandholm", "Furong Huang", - "Stephen McAleer" + "Stephen Marcus McAleer" ], "abstract": "Deploying reinforcement learning (RL) systems requires robustness to uncertainty and model misspecification, yet prior robust RL methods typically only study noise introduced independently across time. However, practical sources of uncertainty are usually coupled across time.We formally introduce temporally-coupled perturbations, presenting a novel challenge for existing robust RL methods. To tackle this challenge, we propose GRAD, a novel game-theoretic approach that treats the temporally-coupled robust RL problem as a partially-observable two-player zero-sum game. By finding an approximate equilibrium within this game, GRAD optimizes for general robustness against temporally-coupled perturbations. Experiments on continuous control tasks demonstrate that, compared with prior methods, our approach achieves a higher degree of robustness to various types of attacks on different attack domains, both in settings with temporally-coupled perturbations and decoupled perturbations.", "type": "Poster", @@ -38755,7 +38755,7 @@ }, { "id": 17485, - "title": "Robust Classification via Regression-Based Loss Reweighting and Label Correction", + "title": "Robust Classification via Regression for Learning with Noisy Labels", "authors": [ "Erik Englesson", "Hossein Azizpour" @@ -38793,10 +38793,10 @@ "id": 17484, "title": "Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation", "authors": [ - "Shih-Ying Yeh", + "SHIH-YING YEH", "Yu-Guan Hsieh", "Zhidong Gao", - "Bernard Yang", + "Bernard B W Yang", "Giyeong Oh", "Yanmin Gong" ], @@ -38816,14 +38816,14 @@ "title": "$\\texttt{NAISR}$: A 3D Neural Additive Model for Interpretable Shape Representation", "authors": [ "Yining Jiao", - "Carlton ZDANSKI", - "Julia Kimbell", + "Carlton Jude ZDANSKI", + "Julia S Kimbell", "Andrew Prince", - "Cameron Worden", + "Cameron P Worden", "Samuel Kirse", "Christopher Rutter", "Benjamin Shields", - "William Dunn", + "William Alexander Dunn", "Jisan Mahmud", "Marc Niethammer" ], @@ -38875,12 +38875,12 @@ "title": "Scalable Diffusion for Materials Generation", "authors": [ "Sherry Yang", - "Kwanghwan Cho", + "KwangHwan Cho", "Amil Merchant", "Pieter Abbeel", "Dale Schuurmans", "Igor Mordatch", - "Ekin Cubuk" + "Ekin Dogus Cubuk" ], "abstract": "Generative models trained on internet-scale data are capable of generating novel and realistic texts, images, and videos. A natural next question is whether these models can advance science, for example by generating novel stable materials. Traditionally, models with explicit structures (e.g., graphs) have been used in modeling structural relationships in scientific data (e.g., atoms and bonds in crystals), but generating structures can be difficult to scale to large and complex systems. 
Another challenge in generating materials is the mismatch between standard generative modeling metrics and downstream applications. For instance, common metrics such as the reconstruction error do not correlate well with the downstream goal of discovering novel stable materials. In this work, we tackle the scalability challenge by developing a unified crystal representation that can represent any crystal structure (UniMat), followed by training a diffusion probabilistic model on these UniMat representations. Our empirical results suggest that despite the lack of explicit structure modeling, UniMat can generate high fidelity crystal structures from larger and more complex chemical systems, outperforming previous graph-based approaches under various generative modeling metrics. To better connect the generation quality of materials to downstream applications, such as discovering novel stable materials, we propose additional metrics for evaluating generative models of materials, including per-composition formation energy and stability with respect to convex hulls through decomposition energy from Density Function Theory (DFT). Lastly, we show that conditional generation with UniMat can scale to previously established crystal datasets with up to millions of crystals structures, outperforming random structure search (the current leading method for structure discovery) in discovering new stable materials.", "type": "Poster", @@ -38989,9 +38989,9 @@ }, { "id": 17473, - "title": "Urial: Aligning Untuned LLMs with Just the 'Write' Amount of In-Context Learning", + "title": "The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning", "authors": [ - "Yuchen Lin", + "Bill Yuchen Lin", "Abhilasha Ravichander", "Ximing Lu", "Nouha Dziri", @@ -39011,7 +39011,7 @@ }, { "id": 17472, - "title": "ADoPD: A Large-Scale Document Page Decomposition Dataset", + "title": "ADOPD: A Large-Scale Document Page Decomposition Dataset", "authors": [ "Jiuxiang Gu", "Xiangxi Shi", @@ -39037,7 +39037,7 @@ "authors": [ "Tong Wu", "Ashwinee Panda", - "Jiachen (Tianhao) Wang", + "Jiachen T. Wang", "Prateek Mittal" ], "abstract": "In-context learning (ICL) is an important capability of Large Language Models (LLMs), enabling these models to dynamically adapt based on specific, in-context exemplars, thereby improving accuracy and relevance.However, LLM's responses may leak the sensitive private information contained in in-context exemplars. To address this challenge, we propose Differentially Private In-context Learning (DP-ICL), a general paradigm for privatizing ICL tasks. The key idea for DP-ICL paradigm is generating differentially private responses through a noisy consensus among an ensemble of LLM's responses based on disjoint exemplar sets. Based on the general paradigm of DP-ICL, we instantiate several techniques showing how to privatize ICL for text classification and language generation. We experiment on four text classification benchmarks and two language generation tasks, and our empirical findings suggest that our DP-ICL achieves a strong utility-privacy tradeoff.", @@ -39078,7 +39078,7 @@ "Arna Ghosh", "Gauthier Gidel", "Guillaume Lajoie", - "Blake A Richards" + "Blake Aaron Richards" ], "abstract": "A growing literature in computational neuroscience leverages gradient descent and learning algorithms that approximate it to study synaptic plasticity in the brain. However, the vast majority of this work ignores a critical underlying assumption: the choice of distance for synaptic changes (i.e. 
the geometry of synaptic plasticity). Gradient descent assumes that the distance is Euclidean, but many other distances are possible, and there is no reason that biology necessarily uses Euclidean geometry. Here, using the theoretical tools provided by mirror descent, we show that, regardless of the loss being minimized, the distribution of synaptic weights will depend on the geometry of synaptic plasticity. We use these results to show that experimentally-observed log-normal weight distributions found in several brain areas are not consistent with standard gradient descent (i.e. a Euclidean geometry), but rather with non-Euclidean distances. Finally, we show that it should be possible to experimentally test for different synaptic geometries by comparing synaptic weight distributions before and after learning. Overall, this work shows that the current paradigm in theoretical work on synaptic plasticity that assumes Euclidean synaptic geometry may be misguided and that it should be possible to experimentally determine the true geometry of synaptic plasticity in the brain.", "type": "Spotlight Poster", @@ -39134,7 +39134,7 @@ }, { "id": 17465, - "title": "On the Stability of Expressive Positional Encodings for Graph Neural Networks", + "title": "On the Stability of Expressive Positional Encodings for Graphs", "authors": [ "Yinan Huang", "William Lu", @@ -39179,8 +39179,8 @@ "title": "Navigating Dataset Documentations in AI: A Large-Scale Analysis of Dataset Cards on HuggingFace", "authors": [ "Xinyu Yang", - "Victor Weixin Liang", - "James Y Zou" + "Weixin Liang", + "James Zou" ], "abstract": "Advances in machine learning are closely tied to the creation of datasets. While data documentation is widely recognized as essential to the reliability, reproducibility, and transparency of ML, we lack a systematic empirical understanding of current dataset documentation practices. To shed light on this question, here we take Hugging Face - one of the largest platforms for sharing and collaborating on ML models and datasets - as a prominent case study. By analyzing all 7,433 dataset documentation on Hugging Face, our investigation provides an overview of the Hugging Face dataset ecosystem and insights into dataset documentation practices, yielding 5 main findings: (1) The dataset card completion rate shows marked heterogeneity correlated with dataset popularity: While 86.0\\% of the top 100 downloaded dataset cards fill out all sections suggested by Hugging Face community, only 7.9\\% of dataset cards with no downloads complete all these sections. (2) A granular examination of each section within the dataset card reveals that the practitioners seem to prioritize Dataset Description and Dataset Structure sections, accounting for 36.2\\% and 33.6\\% of the total card length, respectively, for the most downloaded datasets. In contrast, the Considerations for Using the Data section receives the lowest proportion of content, accounting for just 2.1\\% of the text. (3) By analyzing the subsections within each section and utilizing topic modeling to identify key topics, we uncover what is discussed in each section, and underscore significant themes encompassing both technical and social impacts, as well as limitations within the Considerations for Using the Data section. (4) Our findings also highlight the need for improved accessibility and reproducibility of datasets in the Usage sections. 
(5) In addition, our human annotation evaluation emphasizes the pivotal role of comprehensive dataset content in shaping individuals' perceptions of a dataset card's overall quality. Overall, our study offers a unique perspective on analyzing dataset documentation through large-scale data science analysis and underlines the need for more thorough dataset documentation in machine learning research.", "type": "Poster", @@ -39238,7 +39238,7 @@ "authors": [ "Nuoya Xiong", "Lijun Ding", - "Simon Du" + "Simon Shaolei Du" ], "abstract": "This paper rigorously shows how over-parameterization dramatically changes the convergence behaviors of gradient descent (GD) for the matrix sensing problem, where the goal is to recover an unknown low-rank ground-truth matrix from near-isotropic linear measurements.First, we consider the symmetric setting with the symmetric parameterization where $M^* \\in \\mathbb{R}^{n \\times n}$ is a positive semi-definite unknown matrix of rank $r \\ll n$, and one uses a symmetric parameterization $XX^\\top$ to learn $M^*$. Here $X \\in \\mathbb{R}^{n \\times k}$ with $k > r$ is the factor matrix. We give a novel $\\Omega\\left(1/T^2\\right)$ lower bound of randomly initialized GD for the over-parameterized case ($k >r$) where $T$ is the number of iterations. This is in stark contrast to the exact-parameterization scenario ($k=r$) where the convergence rate is $\\exp\\left(-\\Omega\\left(T\\right)\\right)$. Next, we study asymmetric setting where $M^* \\in \\mathbb{R}^{n_1 \\times n_2}$ is the unknown matrix of rank $r \\ll \\min\\{n_1,n_2\\}$, and one uses an asymmetric parameterization $FG^\\top$ to learn $M^*$ where $F \\in \\mathbb{R}^{n_1 \\times k}$ and $G \\in \\mathbb{R}^{n_2 \\times k}$. We give the first global exact convergence result of randomly initialized GD for the exact-parameterization case ($k=r$) with an $\\exp\\left(-\\Omega\\left(T\\right)\\right)$ rate. Furthermore, we give the first global exact convergence result for the over-parameterization case ($k>r$) with an $\\exp\\left(-\\Omega\\left(\\alpha^2 T\\right)\\right)$ rate where $\\alpha$ is the initialization scale. This linear convergence result in the over-parameterization case is especially significant because one can apply the asymmetric parameterization to the symmetric setting to speed up from $\\Omega\\left(1/T^2\\right)$ to linear convergence. Therefore, we identify a surprising phenomenon: asymmetric parameterization can exponentially speed up convergence. Equally surprising is our analysis that highlights the importance of imbalance between $F$ and $G$. This is in sharp contrast to prior works which emphasize balance. We further give an example showing the dependency on $\\alpha$ in the convergence rate is unavoidable in the worst case. On the other hand, we propose a novel method that only modifies one step of GD and obtains a convergence rate independent of $\\alpha$, recovering the rate in the exact-parameterization case. We provide empirical studies to verify our theoretical findings.", "type": "Spotlight Poster", @@ -39254,7 +39254,7 @@ "title": "Correlated Noise Provably Beats Independent Noise for Differentially Private Learning", "authors": [ "Christopher A. 
Choquette-Choo", - "Krishnamurthy Dvijotham", + "Krishnamurthy Dj Dvijotham", "Krishna Pillutla", "Arun Ganesh", "Thomas Steinke", @@ -39298,7 +39298,7 @@ "title": "Subtractive Mixture Models via Squaring: Representation and Learning", "authors": [ "Lorenzo Loconte", - "Aleksanteri Sladek", + "Aleksanteri Mikulus Sladek", "Stefan Mengel", "Martin Trapp", "Arno Solin", @@ -39316,7 +39316,7 @@ }, { "id": 17456, - "title": "Constrained Bi-Level Optimization: Proximal Lagrangian Value function Approach and Hessian-free Algorithm", + "title": "Constrained Bi-Level Optimization: Proximal Lagrangian Value Function Approach and Hessian-free Algorithm", "authors": [ "Wei Yao", "Chengming Yu", @@ -39352,7 +39352,7 @@ "id": 17453, "title": "Negative Label Guided OOD Detection with Pretrained Vision-Language Models", "authors": [ - "Xue JIANG", + "Xue Jiang", "Feng Liu", "Zhen Fang", "Hong Chen", @@ -39419,7 +39419,7 @@ "Rishabh Joshi", "Misha Khalman", "Mohammad Saleh", - "Peter Liu", + "Peter J Liu", "Jialu Liu" ], "abstract": "Improving the alignment of language models with human preferences remains an active research challenge. Previous approaches have primarily utilized online Reinforcement Learning from Human Feedback (RLHF). Recently, offline methods such as Sequence Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) have emerged as attractive alternatives, offering improvements in stability and scalability while maintaining competitive performance. SLiC refines its loss function using sequence pairs sampled from a supervised fine-tuned (SFT) policy, while DPO directly optimizes language models based on preference data, foregoing the need for a separate reward model. However, the maximum likelihood estimator (MLE) of the target optimal policy requires labeled preference pairs sampled from that policy. The absence of a reward model in DPO constrains its ability to sample preference pairs from the optimal policy. Meanwhile, SLiC can only sample preference pairs from the SFT policy. To address these limitations, we introduce a novel approach called Statistical Rejection Sampling Optimization (RSO) designed to source preference data from the target optimal policy using rejection sampling, enabling a more accurate estimation of the optimal policy. We also propose a unified framework that enhances the loss functions used in both SLiC and DPO from a preference modeling standpoint. Through extensive experiments across diverse tasks, we demonstrate that RSO consistently outperforms both SLiC and DPO as evaluated by both Large Language Models (LLMs) and human raters.", @@ -39436,11 +39436,11 @@ "title": "LILO: Learning Interpretable Libraries by Compressing and Documenting Code", "authors": [ "Gabriel Grand", - "Catherine Wong", - "Maddy Bowers", + "Lionel Wong", + "Matthew Bowers", "Theo X. Olausson", "Muxin Liu", - "Joshua B Tenenbaum", + "Joshua B. Tenenbaum", "Jacob Andreas" ], "abstract": "While large language models (LLMs) now excel at code generation, a key aspect of software development is the art of refactoring: consolidating code into libraries of reusable and readable programs. In this paper, we introduce LILO, a neurosymbolic framework that iteratively synthesizes, compresses, and documents code to build libraries tailored to particular problem domains. Computationally, library learning presents a challenging optimization problem that requires formal reasoning about program structure at scale. 
LILO combines LLM-guided program synthesis with recent algorithmic advances in automated refactoring from Stitch: a symbolic compression system that efficiently identifies optimal lambda abstractions across large code corpora. To make these abstractions interpretable, we introduce an auto-documentation (AutoDoc) procedure that infers natural language names and docstrings based on contextual examples of usage. In addition to improving human readability, we find that AutoDoc boosts performance by helping LILO's synthesizer to interpret and deploy learned abstractions. We evaluate LILO on three inductive program synthesis benchmarks for string editing, scene reasoning, and graphics composition. Compared to existing neural and symbolic methods\u2014including the state-of-the-art library learning algorithm DreamCoder\u2014LILO solves more complex tasks and learns richer libraries that are grounded in linguistic knowledge. In sum, LILO provides a general design pattern for human-interpretable systems that build up shared libraries of program abstractions to solve complex software problems.", @@ -39460,7 +39460,7 @@ "Roman Bushuiev", "Petr Kouba", "Anatolii Filkin", - "Mark\u00e9ta Gabrielov\u00e1", + "Marketa Gabrielova", "Michal Gabriel", "Jiri Sedlar", "Tomas Pluskal", @@ -39505,7 +39505,7 @@ "Ehsan Saleh", "Alex Schwing", "Yu-Xiong Wang", - "Martin Burke", + "Martin D. Burke", "Saurabh Sinha" ], "abstract": "Protein design, a grand challenge of the day, involves optimization on a fitness landscape, and leading methods adopt a model-based approach where a model is trained on a training set (protein sequences and fitness) and proposes candidates to explore next. These methods are challenged by sparsity of high-fitness samples in the training set, a problem that has been in the literature. A less recognized but equally important problem stems from the distribution of training samples in the design space: leading methods are not designed for scenarios where the desired optimum is in a region that is not only poorly represented in training data, but also relatively far from the highly represented low-fitness regions. We show that this problem of \u201cseparation\u201d in the design space is a significant bottleneck in existing model-based optimization tools and propose a new approach that uses a novel VAE as its search model to overcome the problem. We demonstrate its advantage over prior methods in robustly finding improved samples, regardless of the imbalance and separation between low- and high-fitness samples. Our comprehensive benchmark on real and semi-synthetic protein datasets as well as solution design for physics-informed neural networks, showcases the generality of our approach in discrete and continuous design spaces. Our implementation is available at https://anonymous.4open.science/r/PPGVAE-F83E.", @@ -39600,11 +39600,11 @@ "authors": [ "Erik Jones", "Hamid Palangi", - "Clarisse Ribeiro", + "Clarisse Sim\u00f5es Ribeiro", "Varun Chandrasekaran", "Subhabrata Mukherjee", "Arindam Mitra", - "Ahmed H Awadallah", + "Ahmed Hassan Awadallah", "Ece Kamar" ], "abstract": "Large language models (LLMs) frequently hallucinate on abstractive summarization tasks such as document-based question-answering, meeting summarization, and clinical report generation, even though all necessary information is included in context. However, optimizing to make LLMs hallucinate less is challenging, as hallucination is hard to efficiently, cheaply, and reliably evaluate at each optimization step. 
In this work, we show that reducing hallucination on a _synthetic task_ can also reduce hallucination on real-world downstream tasks. Our method, SynTra, first designs a synthetic task where hallucinations are easy to elicit and measure. It next optimizes the LLM's system message via prefix tuning on the synthetic task, then uses the system message on realistic, hard-to-optimize tasks. Across three realistic abstractive summarization tasks, we reduce hallucination for two 13B-parameter LLMs using supervision signal from only a synthetic retrieval task. We also find that optimizing the system message rather than the model weights can be critical; fine-tuning the entire model on the synthetic task can counterintuitively _increase_ hallucination. Overall, SynTra demonstrates that the extra flexibility of working with synthetic data can help mitigate undesired behaviors in practice.", @@ -39620,7 +39620,7 @@ "id": 17567, "title": "A Discretization Framework for Robust Contextual Stochastic Optimization", "authors": [ - "Rares Cristian", + "Rares C Cristian", "Georgia Perakis" ], "abstract": "We study contextual stochastic optimization problems. Optimization problems have uncertain parameters stemming from unknown, context-dependent, distributions. Due to the inherent uncertainty in these problems, one is often interested not only in minimizing expected cost, but also to be robust and protect against worst case scenarios. We propose a novel method that combines the learning stage with knowledge of the downstream optimization task. The method prescribes decisions which aim to maximize the likelihood that the cost is below a (user-controlled) threshold. The key idea is (1) to discretize the feasible region into subsets so that the uncertain objective function can be well approximated deterministically within each subset, and (2) devise a secondary optimization problem to prescribe decisions by integrating the individual approximations determined in step (1). We provide theoretical guarantees bounding the underlying regret of decisions proposed by our method. In addition, experimental results demonstrate that our approach is competitive in terms of average regret and yields more robust solutions than other methods proposed in the literature, including up to 20 times lower worst-case cost on a real-world electricity generation problem.", @@ -39813,7 +39813,7 @@ "title": "DreamLLM: Synergistic Multimodal Comprehension and Creation", "authors": [ "Runpei Dong", - "chunrui han", + "Chunrui Han", "Yuang Peng", "Zekun Qi", "Zheng Ge", @@ -39934,7 +39934,7 @@ "id": 17423, "title": "Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models", "authors": [ - "Andrew Engel", + "Andrew William Engel", "Zhichao Wang", "Natalie Frank", "Ioana Dumitriu", @@ -39997,9 +39997,9 @@ "title": "Bidirectional Temporal Diffusion Model for Temporally Consistent Human Animation", "authors": [ "Tserendorj Adiya", - "Jung Eun Lee", - "Sanghun Kim", "Jae Shin Yoon", + "JUNGEUN LEE", + "Sanghun Kim", "Hwasup Lim" ], "abstract": "We introduce a method to generate temporally coherent human animation from a single image, a video, or a random noise.This problem has been formulated as modeling of an auto-regressive generation, i.e., to regress past frames to decode future frames.However, such unidirectional generation is highly prone to motion drifting over time, generating unrealistic human animation with significant artifacts such as appearance distortion. 
We claim that bidirectional temporal modeling enforces temporal coherence on a generative network by largely suppressing the appearance ambiguity.To prove our claim, we design a novel human animation framework using a denoising diffusion model: a neural network learns to generate the image of a person by denoising temporal Gaussian noises whose intermediate results are cross-conditioned bidirectionally between consecutive frames. In the experiments, our method demonstrates strong performance compared to existing unidirectional approaches with realistic temporal coherence.", @@ -40057,7 +40057,7 @@ "Wenhao Zhan", "Masatoshi Uehara", "Wen Sun", - "Jason Lee" + "Jason D. Lee" ], "abstract": "Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories, rather than explicit reward signals. While PbRL has demonstrated practical success in fine-tuning language models, existing theoretical work focuses on regret minimization and fails to capture most of the practical frameworks. In this study, we fill in such a gap between theoretical PbRL and practical algorithms by proposing a theoretical reward-agnostic PbRL framework where exploratory trajectories that enable accurate learning of hidden reward functions are acquired before collecting any human feedback. Theoretical analysis demonstrates that our algorithm requires less human feedback for learning the optimal policy under preference-based models with linear parameterization and unknown transitions, compared to the existing theoretical literature. Specifically, our framework can incorporate linear and low-rank MDPs with efficient sample complexity. Additionally, we investigate reward-agnostic RL with action-based comparison feedback and introduce an efficient querying algorithm tailored to this scenario.", "type": "Spotlight Poster", @@ -40193,7 +40193,7 @@ }, { "id": 17409, - "title": "VDC: Versatile Data Cleanser for Detecting Dirty Samples via Visual-Linguistic Inconsistency", + "title": "VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models", "authors": [ "Zihao Zhu", "Mingda Zhang", @@ -40284,12 +40284,12 @@ "title": "Image Translation as Diffusion Visual Programmers", "authors": [ "Cheng Han", - "James Liang", + "James Chenhao Liang", "Qifan Wang", "MAJID RABBANI", "Sohail Dianat", "Raghuveer Rao", - "Yingnian Wu", + "Ying Nian Wu", "Dongfang Liu" ], "abstract": "We introduce the novel Diffusion Visual Programmer (DVP), a neuro-symbolic image translation framework. Our proposed DVP seamlessly embeds a condition-flexible diffusion model within the GPT architecture, orchestrating a coherent sequence of visual programs ($i.e.$, computer vision models) for various pro-symbolic steps, which span RoI identification, style transfer, and position manipulation, facilitating transparent and controllable image translation processes. Extensive experiments demonstrate DVP\u2019s remarkable performance, surpassing concurrent arts. This success can be attributed to several key features of DVP: First, DVP achieves condition-flexible translation via instance normalization, enabling the model to eliminate sensitivity caused by the manual guidance and optimally focus on textual descriptions for high-quality content generation. 
Second, the frame work enhances in-context reasoning by deciphering intricate high-dimensional concepts in feature spaces into more accessible low-dimensional symbols ($e.g.$, [Prompt], [RoI object]), allowing for localized, context-free editing while maintaining overall coherence. Last but not least, DVP improves systemic controllability and explainability by offering explicit symbolic representations at each programming stage, empowering users to intuitively interpret and modify results. Our research marks a substantial step towards harmonizing artificial image translation processes with cognitive intelligence, promising broader applications.", @@ -40307,9 +40307,9 @@ "authors": [ "Ilyes Batatia", "Lars Leon Schaaf", - "G\u00e1bor Cs\u00e1nyi", + "Gabor Csanyi", "Christoph Ortner", - "Felix Faber" + "Felix Andreas Faber" ], "abstract": "Graph Neural Networks (GNNs), especially message-passing neural networks (MPNNs), have emerged as powerful architectures for learning on graphs in diverse applications. However, MPNNs face challenges when modeling non-local interactions in systems such as large conjugated molecules, metals, or amorphous materials.Although Spectral GNNs and traditional neural networks such as recurrent neural networks and transformers mitigate these challenges, they often lack extensivity, adaptability, generalizability, computational efficiency, or fail to capture detailed structural relationships or symmetries in the data. To address these concerns, we introduce Matrix Function Neural Networks (MFNs), a novel architecture that parameterizes non-local interactions through analytic matrix equivariant functions. Employing resolvent expansions offers a straightforward implementation and the potential for linear scaling with system size.The MFN architecture achieves state-of-the-art performance in standard graph benchmarks, such as the ZINC and TU datasets, and is able to capture intricate non-local interactions in quantum systems. The code and the datasets will be made public.", "type": "Spotlight Poster", @@ -40459,7 +40459,7 @@ }, { "id": 17390, - "title": "Perceptual Measurements, Distances and Metrics", + "title": "Perceptual Scales Predicted by Fisher Information Metrics", "authors": [ "Jonathan Vacher", "Pascal Mamassian" @@ -40507,7 +40507,7 @@ "Jakob Buhmann", "Derek Bradley", "Otmar Hilliges", - "Romann Weber" + "Romann M. Weber" ], "abstract": "While conditional diffusion models are known to have good coverage of the data distribution, they still face limitations in output diversity, particularly when sampled with a high classifier-free guidance scale for optimal image quality or when trained on small datasets. We attribute this problem to the role of the conditioning signal in inference and offer an improved sampling strategy for diffusion models that can increase generation diversity, especially at high guidance scales, with minimal loss of sample quality. Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment. Our Condition-Annealed Diffusion Sampler (CADS) can be used with any pretrained model and sampling algorithm, and we show that it boosts the diversity of diffusion models in various conditional generation tasks. 
Further, using an existing pretrained diffusion model, CADS achieves a new state-of-the-art FID of 1.70 and 2.31 for class-conditional ImageNet generation at 256$\\times$256 and 512$\\times$512 respectively.", "type": "Spotlight Poster", @@ -40543,7 +40543,7 @@ "authors": [ "Yucheng Yang", "Tianyi Zhou", - "Qiang HE", + "Qiang He", "Lei Han", "Mykola Pechenizkiy", "Meng Fang" @@ -40566,7 +40566,7 @@ "Francesco Ballerini", "Allan Zhou", "Samuele Salti", - "Luigi Di Stefano" + "Luigi di Stefano" ], "abstract": "Driven by the appealing properties of neural fields for storing and communicating 3D data, the problem of directly processing them to address tasks such as classification and part segmentation has emerged and has been investigated in recent works. Early approaches employ neural fields parameterized by shared networks trained on the whole dataset, achieving good task performance but sacrificing reconstruction quality.To improve the latter, later methods focus on individual neural fields parameterized as large Multi-Layer Perceptrons (MLPs), which are, however, challenging to process due to the high dimensionality of the weight space, intrinsic weight space symmetries, and sensitivity to random initialization. Hence, results turn out significantly inferior to those achieved by processing explicit representations, e.g., point clouds or meshes.In the meantime, hybrid representations, in particular based on tri-planes, have emerged as a more effective and efficient alternative to realize neural fields, but their direct processing has not been investigated yet.In this paper, we show that the tri-plane discrete data structure encodes rich information, which can be effectively processed by standard deep-learning machinery. We define an extensive benchmark covering a diverse set of fields such as occupancy, signed/unsigned distance, and, for the first time, radiance fields. While processing a field with the same reconstruction quality, we achieve task performance far superior to frameworks that process large MLPs and, for the first time, almost on par with architectures handling explicit representations.", "type": "Poster", @@ -40618,7 +40618,7 @@ "Panagiotis Theodoropoulos", "Guan-Horng Liu", "Tianrong Chen", - "Augustinos Saravanos", + "Augustinos D Saravanos", "Evangelos Theodorou" ], "abstract": "Neural networks and neural ODEs tend to be vulnerable to adversarial attacks, rendering robust optimizers critical to curb the success of such attacks. In this regard, the key insight of this work is to interpret Neural ODE optimization as a min-max optimal control problem. More particularly, we present Game Theoretic Second-Order Neural Optimizer (GTSONO), a robust game theoretic optimizer based on the principles of min-max Differential Dynamic Programming.The proposed method exhibits significant computational benefits due to efficient matrix decompositions and provides convergence guarantees to local saddle points.Empirically, the robustness of the proposed optimizer is demonstrated through greater robust accuracy compared to benchmark optimizers when trained on clean images. 
Additionally, its ability to provide a performance increase when adapted to an already existing adversarial defense technique is also illustrated.Finally, the superiority of the proposed update law over its gradient based counterpart highlights the potential benefits of incorporating robust optimal control paradigms into adversarial training methods.", @@ -40674,7 +40674,7 @@ }, { "id": 17371, - "title": "Nemesis: Normalizing the soft-prompt vectors of vision-language models", + "title": "Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models", "authors": [ "Shuai Fu", "Xiequn Wang", @@ -40715,12 +40715,12 @@ "id": 17369, "title": "Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting", "authors": [ - "xinlu zhang", + "Xinlu Zhang", "Shiyang Li", "Xianjun Yang", "Chenxin Tian", "Yao Qin", - "Linda Petzold" + "Linda Ruth Petzold" ], "abstract": "Large language models (LLMs) demonstrate remarkable medical expertise, but data privacy concerns impede their direct use in healthcare environments. Although offering improved data privacy protection, domain-specific small language models (SLMs) often underperform LLMs, emphasizing the need for methods that reduce this performance gap while alleviating privacy concerns. In this paper, we present a simple yet effective method that harnesses LLMs' medical proficiency to boost SLM performance in medical tasks under $privacy-restricted$ scenarios. Specifically, we mitigate patient privacy issues by extracting keywords from medical data and prompting the LLM to generate a medical knowledge-intensive context by simulating clinicians' thought processes. This context serves as additional input for SLMs, augmenting their decision-making capabilities. Our method significantly enhances performance in both few-shot and full training settings across three medical knowledge-intensive tasks, achieving up to a 22.57% increase in absolute accuracy compared to SLM fine-tuning without context, and sets new state-of-the-art results in two medical tasks within privacy-restricted scenarios. Further out-of-domain testing and experiments in two general domain datasets showcase its generalizability and broad applicability.", "type": "Poster", @@ -40736,8 +40736,8 @@ "title": "Out-of-Variable Generalisation for Discriminative Models", "authors": [ "Siyuan Guo", - "Jonas Wildberger", - "Bernhard Schoelkopf" + "Jonas Bernhard Wildberger", + "Bernhard Sch\u00f6lkopf" ], "abstract": "The ability of an agent to do well in new environments is a critical aspect of intelligence. In machine learning, this ability is known as $\\textit{strong}$ or $\\textit{out-of-distribution}$ generalization. However, merely considering differences in data distributions is inadequate for fully capturing differences between learning environments. In the present paper, we investigate $\\textit{out-of-variable}$ generalization, which pertains to an agent's generalization capabilities concerning environments with variables that were never jointly observed before. This skill closely reflects the process of animate learning: we, too, explore Nature by probing, observing, and measuring $\\textit{subsets}$ of variables at any given time. Mathematically, $\\textit{out-of-variable}$ generalization requires the efficient re-use of past marginal information, i.e., information over subsets of previously observed variables. We study this problem, focusing on prediction tasks across environments that contain overlapping, yet distinct, sets of causes. 
We show that after fitting a classifier, the residual distribution in one environment reveals the partial derivative of the true generating function with respect to the unobserved causal parent in that environment. We leverage this information and propose a method that exhibits non-trivial out-of-variable generalization performance when facing an overlapping, yet distinct, set of causal predictors.", "type": "Poster", @@ -40791,7 +40791,7 @@ "Haitao Yang", "Xiangru Huang", "Bo Sun", - "Chandrajit Bajaj", + "Chandrajit L. Bajaj", "Qixing Huang" ], "abstract": "This paper introduces GenCorres, a novel unsupervised joint shape matching (JSM) approach. The basic idea of GenCorres is to learn a parametric mesh generator to fit an unorganized deformable shape collection while constraining deformations between adjacent synthetic shapes to preserve geometric structures such as local rigidity and local conformality. GenCorres presents three appealing advantages over existing JSM techniques. First, GenCorres performs JSM among a synthetic shape collection whose size is much bigger than the input shapes and fully leverages the data-driven power of JSM. Second, GenCorres unifies consistent shape matching and pairwise matching (i.e., by enforcing deformation priors between adjacent synthetic shapes). Third, the generator provides a concise encoding of consistent shape correspondences. However, learning a mesh generator from an unorganized shape collection is challenging. It requires a good initial fitting to each shape and can easily get trapped by local minimums. GenCorres addresses this issue by learning an implicit generator from the input shapes, which provides intermediate shapes between two arbitrary shapes. We introduce a novel approach for computing correspondences between adjacent implicit surfaces and force the correspondences to preserve geometric structures and be cycle-consistent. Synthetic shapes of the implicit generator then guide initial fittings (i.e., via template-based deformation) for learning the mesh generator. Experimental results show that GenCorres considerably outperforms state-of-the-art JSM techniques on benchmark datasets. The synthetic shapes of GenCorres preserve local geometric features and yield competitive performance gains against state-of-the-art deformable shape generators.", @@ -40826,9 +40826,9 @@ "authors": [ "Yuan Gong", "Hongyin Luo", - "Alexander Liu", + "Alexander H. Liu", "Leonid Karlinsky", - "James R Glass" + "James R. Glass" ], "abstract": "The ability of artificial intelligence (AI) systems to perceive and comprehend audio signals is crucial for many applications. Although significant progress has been made in this area since the development of AudioSet, most existing models are designed to map audio inputs to pre-defined, discrete sound label sets. In contrast, humans possess the ability to not only classify sounds into general categories, but also to listen to the finer details of the sounds, explain the reason for the predictions, think about what the sound infers, and understand the scene and what action needs to be taken, if any. Such capabilities beyond perception are not yet present in existing audio models. On the other hand, modern large language models (LLMs) exhibit emerging reasoning ability but they lack audio perception capabilities. Therefore, we ask the question: can we build a model that has both audio perception and a reasoning ability? In this paper, we propose a new audio foundation model, called LTU (Listen, Think, and Understand). 
To train LTU, we created a new OpenAQA-5M dataset consisting of 1.9 million closed-ended and 3.7 million open-ended, diverse (audio, question, answer) tuples, and have used an autoregressive training framework with a perception-to-understanding curriculum. LTU demonstrates strong performance and generalization ability on conventional audio tasks such as classification and captioning. More importantly, it exhibits emerging audio reasoning and comprehension abilities that are absent in existing audio models. To the best of our knowledge, LTU is one of the first multimodal large language models that focus on general audio (rather than just speech) understanding.", "type": "Poster", @@ -40890,8 +40890,8 @@ "id": 17936, "title": "TokenFlow: Consistent Diffusion Features for Consistent Video Editing", "authors": [ - "michal geyer", - "Omer Bar Tal", + "Michal Geyer", + "Omer Bar-Tal", "Shai Bagon", "Tali Dekel" ], @@ -40911,7 +40911,7 @@ "title": "Assessing Uncertainty in Similarity Scoring: Performance & Fairness in Face Recognition", "authors": [ "Jean-R\u00e9my Conti", - "Stephan CLEMENCON" + "Stephan Cl\u00e9men\u00e7on" ], "abstract": "The ROC curve is the major tool for assessing not only the performance but also the fairness properties of a similarity scoring function. In order to draw reliable conclusions based on empirical ROC analysis, accurately evaluating the uncertainty level related to statistical versions of the ROC curves of interest is absolutely necessary, especially for applications with considerable societal impact such as Face Recognition. In this article, we prove asymptotic guarantees for empirical ROC curves of similarity functions as well as for by-product metrics useful to assess fairness. We also explain that, because the false acceptance/rejection rates are of the form of U-statistics in the case of similarity scoring, the naive bootstrap approach may jeopardize the assessment procedure. A dedicated recentering technique must be used instead. Beyond the theoretical analysis carried out, various experiments using real face image datasets provide strong empirical evidence of the practical relevance of the methods promoted here, when applied to several ROC-based measures such as popular fairness metrics.", "type": "Poster", @@ -40927,10 +40927,10 @@ "title": "CoBIT: A Contrastive Bi-directional Image-Text Generation Model", "authors": [ "Haoxuan You", - "Xiaoyue Guo", + "Mandy Guo", "Zhecan Wang", "Kai-Wei Chang", - "Jason Baldridge", + "Jason Michael Baldridge", "Jiahui Yu" ], "abstract": "The field of Vision-and-Language (VL) has witnessed a proliferation of pretrained foundation models. Current techniques typically employ only one type of training objective, whether it's (1) contrastive objectives (like CLIP), (2) image-to-text generative objectives (like PaLI), or (3) text-to-image generative objectives (like Parti). However, all these three objectives are mutually relevant and are all based on image-text pairs. Intuitively, the first two objectives can be considered as complementary projections between two modalities, and contrastive learning can preserve global alignment and generations facilitate fine-grained understanding. Inspired by this, we present a Contrastive Bi-directional Image-Text generation model (CoBIT) to first time unify the three pre-training objectives in one framework. Specifically, CoBIT employs a novel unicoder-decoder structure consisting of an image unicoder, a text unicoder, and a cross-modal decoder. 
The image/text unicoders can switch between encoding and decoding in different tasks, enabling flexibility and shared knowledge that benefits both image-to-text and text-to-image generations. CoBIT achieves superior performance in image understanding, image-text understanding (Retrieval, Captioning, VQA, SNLI-VE), and text-based content creation, particularly in zero-shot scenarios.", @@ -40998,7 +40998,7 @@ "id": 18063, "title": "Gradual Domain Adaptation via Gradient Flow", "authors": [ - "Zhan ZHUANG", + "Zhan Zhuang", "Yu Zhang", "Ying Wei" ], @@ -41066,7 +41066,7 @@ "id": 18149, "title": "Modulated Phase Diffusor: Content-Oriented Feature Synthesis for Detecting Unknown Objects", "authors": [ - "Aming Wu", + "Aming WU", "Cheng Deng" ], "abstract": "To promote the safe deployment of object detectors, a task of unsupervised out-of-distribution object detection (OOD-OD) is recently proposed, aiming to detect unknown objects during training without reliance on any auxiliary OOD data. To alleviate the impact of lacking OOD data, for this task, one feasible solution is to exploit the known in-distribution (ID) data to synthesize proper OOD information for supervision, which strengthens detectors' discrimination. From the frequency perspective, since the phase generally reflects the content of the input, in this paper, we explore leveraging the phase of ID features to generate expected OOD features involving different content. And a method of Modulated Phase Diffusion (MPD) is proposed, containing a shared forward and two different reverse processes. Specifically, after calculating the phase of the extracted features, to prevent the rapid loss of content in the phase, the forward process gradually performs Gaussian Average on the phase instead of adding noise. The averaged phase and original amplitude are combined to obtain the features taken as the input of the reverse process. Next, one OOD branch is defined to synthesize virtual OOD features by continually enlarging the content discrepancy between the OOD features and original ones. Meanwhile, another modulated branch is designed to generate augmented features owning a similar phase as the original features by scaling and shifting the OOD branch. Both original and augmented features are used for training, enhancing the discrimination. Experimental results on OOD-OD, incremental object detection, and open-set object detection demonstrate the superiorities of our method. The source code will be released at https://github.com/AmingWu/MPD.", @@ -41400,7 +41400,7 @@ "authors": [ "Brandon Trabucco", "Kyle Doherty", - "Max Gurinas", + "Max A Gurinas", "Ruslan Salakhutdinov" ], "abstract": "Data augmentation is one of the most prevalent tools in deep learning, underpinning many recent advances, including those from classification, generative models, and representation learning. The standard approach to data augmentation combines simple transformations like rotations and flips to generate new images from existing ones. However, these new images lack diversity along key semantic axes present in the data. Current augmentations cannot alter the high-level semantic attributes, such as animal species present in a scene, to enhance the diversity of data. We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models. 
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples. We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.", @@ -41502,7 +41502,7 @@ }, { "id": 18457, - "title": "GRANDE: Gradient-Based Decision Tree Ensembles", + "title": "GRANDE: Gradient-Based Decision Tree Ensembles for Tabular Data", "authors": [ "Sascha Marton", "Stefan L\u00fcdtke", @@ -41520,7 +41520,7 @@ }, { "id": 18463, - "title": "Harnessing Overlap in Blockwise Transformers for Near-Infinite Context", + "title": "RingAttention with Blockwise Transformers for Near-Infinite Context", "authors": [ "Hao Liu", "Matei Zaharia", @@ -41572,7 +41572,7 @@ "id": 18487, "title": "Modeling Boundedly Rational Agents with Latent Inference Budgets", "authors": [ - "Athul Jacob", + "Athul Paul Jacob", "Abhishek Gupta", "Jacob Andreas" ], @@ -41587,7 +41587,7 @@ }, { "id": 18489, - "title": "A unified sampling framework for solver searching of Diffusion Probabilistic Models", + "title": "A Unified Sampling Framework for Solver Searching of Diffusion Probabilistic Models", "authors": [ "Enshu Liu", "Xuefei Ning", @@ -41665,7 +41665,7 @@ "Katherine Hermann", "Hossein Mobahi", "Thomas FEL", - "Michael Mozer" + "Michael Curtis Mozer" ], "abstract": "Deep-learning models can extract a rich assortment of features from data. Which features a model uses depends not only on *predictivity*---how reliably a feature indicates train-set labels---but also on *availability*---how easily the feature can be extracted, or leveraged, from inputs. The literature on shortcut learning has noted examples in which models privilege one feature over another, for example texture over shape and image backgrounds over foreground objects. Here, we test hypotheses about which input properties are more available to a model, and systematically study how predictivity and availability interact to shape models' feature use. We construct a minimal, explicit generative framework for synthesizing classification datasets with two latent features that vary in predictivity and in factors we hypothesize to relate to availability, and quantify a model's shortcut bias---its over-reliance on the shortcut (more available, less predictive) feature at the expense of the core (less available, more predictive) feature. We find that linear models are relatively unbiased, but introducing a single hidden layer with ReLU or Tanh units yields a bias. Our empirical findings are consistent with a theoretical account based on Neural Tangent Kernels. Finally, we study how models used in practice trade off predictivity and availability in naturalistic datasets, discovering availability manipulations which increase models' degree of shortcut bias. Taken together, these findings suggest that the propensity to learn shortcut features is a fundamental characteristic of deep nonlinear architectures warranting systematic study given its role in shaping how models solve tasks.", "type": "Spotlight Poster", @@ -41697,7 +41697,7 @@ "authors": [ "Junchi Yu", "Ran He", - "Rex Ying" + "Zhitao Ying" ], "abstract": "Large Language Models (LLMs) have achieved remarkable success in reasoning tasks with the development of prompting methods. 
However, existing prompting approaches cannot reuse insights of solving similar problems and suffer from accumulated errors in multi-step reasoning, since they prompt LLMs to reason \\textit{from scratch}.To address these issues, we propose \\textbf{\\textit{Thought Propagation} (TP)}, which explores the analogous problems and leverages their solutions to enhance the complex reasoning ability of LLMs.These analogous problems are related to the input one, with reusable solutions and problem-solving strategies.Thus, it is promising to propagate insights of solving previous analogous problems to inspire new problem-solving. To achieve this, TP first prompts LLMs to propose and solve a set of analogous problems that are related to the input one. Then, TP reuses the results of analogous problems to directly yield a new solution or derive a knowledge-intensive plan for execution to amend the initial solution obtained from scratch.TP is compatible with existing prompting approaches, allowing plug-and-play generalization and enhancement in a wide range of tasks without much labor in task-specific prompt engineering. Experiments across three challenging tasks demonstrate TP enjoys a substantial improvement over the baselines by an average of 12\\% absolute increase in finding the optimal solutions in Shortest-path Reasoning, 13\\% improvement of human preference in Creative Writing, and 15\\% enhancement in the task completion rate of LLM-Agent Planning.", "type": "Poster", @@ -41815,7 +41815,7 @@ "title": "Learning From Simplicial Data Based on Random Walks and 1D Convolutions", "authors": [ "Florian Frantzen", - "Michael Schaub" + "Michael T Schaub" ], "abstract": "Triggered by limitations of graph-based deep learning methods in terms of computational expressivity and model flexibility, recent years have seen a surge of interest in computational models that operate on higher-order topological domains such as hypergraphs and simplicial complexes. While the increased expressivity of these models can indeed lead to a better classification performance and a more faithful representation of the underlying system, the computational cost of these higher-order models can increase dramatically. To this end, we here explore a simplicial complex neural network learning architecture based on random walks and fast 1D convolutions (SCRaWl), in which we can adjust the increase in computational cost by varying the length and number of random walks considered while accounting for higher-order relationships. Importantly, due to the random walk-based design, the expressivity of the proposed architecture is provably incomparable to that of existing message-passing simplicial neural networks. We empirically evaluate SCRaWl on real-world datasets and show that it outperforms other simplicial neural networks.", "type": "Poster", @@ -41881,7 +41881,7 @@ "title": "AMAGO: Scalable In-Context Reinforcement Learning for Adaptive Agents", "authors": [ "Jake Grigsby", - "Jim Fan", + "Linxi Fan", "Yuke Zhu" ], "abstract": "We introduce AMAGO, an in-context Reinforcement Learning (RL) agent that uses sequence models to tackle the challenges of generalization, long-term memory, and meta-learning. Recent works have shown that off-policy learning can make in-context RL with recurrent policies viable. Nonetheless, these approaches require extensive tuning and limit scalability by creating key bottlenecks in agents' memory capacity, planning horizon, and model size. 
AMAGO revisits and redesigns the off-policy in-context approach to successfully train long-sequence Transformers over entire rollouts in parallel with end-to-end RL. Our agent is scalable and applicable to a wide range of problems, and we demonstrate its strong performance empirically in meta-RL and long-term memory domains. AMAGO's focus on sparse rewards and off-policy data also allows in-context learning to extend to goal-conditioned problems with challenging exploration. When combined with a multi-goal hindsight relabeling scheme, AMAGO can solve a previously difficult category of open-world domains, where agents complete many possible instructions in procedurally generated environments.", @@ -42050,7 +42050,7 @@ "id": 18974, "title": "Understanding Domain Generalization: A Noise Robustness Perspective", "authors": [ - "RUI QIAO", + "Rui Qiao", "Bryan Kian Hsiang Low" ], "abstract": "Despite the rapid development of machine learning algorithms for domain generalization (DG), there is no clear empirical evidence that the existing DG algorithms outperform the classic empirical risk minimization (ERM) across standard benchmarks. To better understand this phenomenon, we investigate whether there are benefits of DG algorithms over ERM through the lens of label noise.Specifically, our finite-sample analysis reveals that label noise exacerbates the effect of spurious correlations for ERM, undermining generalization. Conversely, we illustrate that DG algorithms exhibit implicit label-noise robustness during finite-sample training even when spurious correlation is present.Such desirable property helps mitigate spurious correlations and improve generalization in synthetic experiments. However, additional comprehensive experiments on real-world benchmark datasets indicate that label-noise robustness does not necessarily translate to better performance compared to ERM. We conjecture that the failure mode of ERM arising from spurious correlations may be less pronounced in practice. Our code is available at https://github.com/qiaoruiyt/NoiseRobustDG", @@ -42218,7 +42218,7 @@ }, { "id": 19096, - "title": "L2MAC: Large Language Model Automatic Computer for Unbounded Code Generation", + "title": "L2MAC: Large Language Model Automatic Computer for Extensive Code Generation", "authors": [ "Samuel Holt", "Max Ruiz Luyten", @@ -42253,9 +42253,9 @@ "id": 19140, "title": "The LLM Surgeon", "authors": [ - "Tycho van der Ouderaa", + "Tycho F. A. van der Ouderaa", "Markus Nagel", - "Mart van Baalen", + "Mart Van Baalen", "Tijmen Blankevoort" ], "abstract": "State-of-the-art language models are becoming increasingly large in an effort to achieve the highest performance on large corpora of available textual data. However, the sheer size of the Transformer architectures makes it difficult to deploy models within computational, environmental or device-specific constraints. We explore data-driven compression of existing pretrained models as an alternative to training smaller models from scratch. To do so, we scale Kronecker-factored curvature approximations of the target loss landscape to large language models. In doing so, we can compute both the dynamic allocation of structures that can be removed as well as updates of remaining weights that account for the removal. We provide a general framework for unstructured, semi-structured and structured pruning and improve upon weight updates to capture more correlations between weights, while remaining computationally efficient. 
Experimentally, our method can prune rows and columns from a range of OPT models and Llamav2-7B by 20\\%-30\\%, with a negligible loss in performance, and achieve state-of-the-art results in unstructured and semi-structured pruning of large language models. We will open source our code on GitHub upon acceptance.", @@ -42301,10 +42301,10 @@ }, { "id": 19222, - "title": "Learning Scalar Fields for Molecular Docking with Fast Fourier Transforms", + "title": "Equivariant Scalar Fields for Molecular Docking with Fast Fourier Transforms", "authors": [ "Bowen Jing", - "Tommi Jaakkola", + "Tommi S. Jaakkola", "Bonnie Berger" ], "abstract": "Molecular docking is critical to structure-based virtual screening, yet the throughput of such workflows is limited by the expensive optimization of scoring functions involved in most docking algorithms. We explore how machine learning can accelerate this process by learning a scoring function with a functional form that allows for more rapid optimization. Specifically, we define the scoring function to be the cross-correlation of multi-channel ligand and protein scalar fields parameterized by equivariant graph neural networks, enabling rapid optimization over rigid-body degrees of freedom with fast Fourier transforms. Moreover, the runtime of our approach can be amortized at several levels of abstraction, and is particularly favorable for virtual screening settings with a common binding pocket. We benchmark our scoring functions on two simplified docking-related tasks: decoy pose scoring and rigid conformer docking. Our method attains similar but faster performance on crystal structures compared to the Vina and Gnina scoring functions, and is more robust on computationally predicted structures.", @@ -42524,7 +42524,7 @@ }, { "id": 19426, - "title": "TEDDY: Trimming Edges with Degree-based Graph Diffusion Strategy", + "title": "TEDDY: Trimming Edges with Degree-based Discrimination Strategy", "authors": [ "Hyunjin Seo", "Jihun Yun", @@ -42593,7 +42593,7 @@ "title": "Why is SAM Robust to Label Noise?", "authors": [ "Christina Baek", - "Zico Kolter", + "J Zico Kolter", "Aditi Raghunathan" ], "abstract": "Sharpness-Aware Minimization (SAM) is most known for achieving state-of the-art performances on natural image and language tasks. However, its most pronounced improvements (of tens of percent) is rather in the presence of label noise. Understanding SAM's label noise robustness requires a departure from characterizing the robustness of minimas lying in ``flatter'' regions of the loss landscape. In particular, the peak performance occurs with early stopping, far before the loss converges. We decompose SAM's robustness into two effects: one induced by changes to the logit term and the other induced by changes to the network Jacobian. The first can be observed in linear logistic regression where SAM provably upweights the gradient contribution from clean examples. Although this explicit upweighting is also observable in neural networks, when we intervene and modify SAM to remove this effect, surprisingly, we see no visible degradation in performance. We infer that SAM's effect in deeper networks is instead explained entirely by the effect SAM has on the network Jacobian. We theoretically derive the explicit regularization induced by this Jacobian effect in two layer linear networks. 
Motivated by our analysis, we see that cheaper alternatives to SAM that explicitly induce these regularization effects largely recover the benefits even in deep networks trained on real-world datasets.", @@ -42726,7 +42726,7 @@ }, { "id": 17432, - "title": "Revisiting the Last-Iterative Convergence of Stochastic Gradient Methods", + "title": "Revisiting the Last-Iterate Convergence of Stochastic Gradient Methods", "authors": [ "Zijian Liu", "Zhengyuan Zhou" @@ -42763,7 +42763,7 @@ "authors": [ "Ruqi Bai", "Saurabh Bagchi", - "David Inouye" + "David I. Inouye" ], "abstract": "While prior federated learning (FL) methods mainly consider client heterogeneity, we focus on the *Federated Domain Generalization (DG)* task, which introduces train-test heterogeneity in the FL context.Existing evaluations in this field are limited in terms of the scale of the clients and dataset diversity.Thus, we propose a Federated DG benchmark that aim to test the limits of current methods with high client heterogeneity, large numbers of clients, and diverse datasets. Towards this objective, we introduce a novel data partitioning method that allows us to distribute any domain dataset among few or many clients while controlling client heterogeneity. We then introduce and apply our methodology to evaluate $13$ Federated DG methods, which include centralized DG methods adapted to the FL context, FL methods that handle client heterogeneity, and methods designed specifically for Federated DG on $7$ datasets.Our results suggest that, despite some progress, significant performance gaps remain in Federated DG, especially when evaluating with a large number of clients, high client heterogeneity, or more realistic datasets. Furthermore, our extendable benchmark code will be publicly released to aid in benchmarking future Federated DG approaches.", "type": "Spotlight Poster", @@ -42796,7 +42796,7 @@ "title": "The Need for Speed: Pruning Transformers with One Recipe", "authors": [ "Samir Khaki", - "Konstantinos Plataniotis" + "Konstantinos N Plataniotis" ], "abstract": "We introduce the $\\textbf{O}$ne-shot $\\textbf{P}$runing $\\textbf{T}$echnique for $\\textbf{I}$nterchangeable $\\textbf{N}$etworks ($\\textbf{OPTIN}$) framework as a tool to increase the efficiency of pre-trained transformer architectures $\\textit{without requiring re-training}$. Recent works have explored improving transformer efficiency, however often incur computationally expensive re-training procedures or depend on architecture-specific characteristics, thus impeding practical wide-scale adoption. To address these shortcomings, the OPTIN framework leverages intermediate feature distillation, capturing the long-range dependencies of model parameters (coined $\\textit{trajectory}$), to produce state-of-the-art results on natural language, image classification, transfer learning, and semantic segmentation tasks $\\textit{without re-training}$. Given a FLOP constraint, the OPTIN framework will compress the network while maintaining competitive accuracy performance and improved throughput. Particularly, we show a $\\leq 2$% accuracy degradation from NLP baselines and a $0.5$% improvement from state-of-the-art methods on image classification at competitive FLOPs reductions. We further demonstrate the generalization of tasks and architecture with comparative performance using Mask2Former for semantic segmentation and cnn-style networks. 
OPTIN presents one of the first one-shot efficient frameworks for compressing transformer architectures that generalizes well across different class domains, in particular: natural language and image-related tasks, without $\\textit{re-training}$.", "type": "Poster", @@ -42832,13 +42832,13 @@ "title": "How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions", "authors": [ "Lorenzo Pacchiardi", - "Alex Chan", + "Alex James Chan", "S\u00f6ren Mindermann", "Ilan Moscovitz", - "Alexa Pan", + "Alexa Yue Pan", "Yarin Gal", "Owain Evans", - "Jan Brauner" + "Jan M. Brauner" ], "abstract": "Large language models (LLMs) can \u201clie\u201d, which we define as outputting false statements despite \u201cknowing\u201d the truth in a demonstrable sense. An example is an LLM instructed to spread misinformation. Here, we conduct an initial exploration into the feasibility of lie detection for LLMs. We develop a simple lie detector that requires neither access to the LLM\u2019s activations (black-box) nor ground-truth knowledge of the fact in question. The detector works by asking a predefined set of unrelated follow-up questions after a suspected lie, and feeding the LLM\u2019s yes/no answers into a logistic regression classifier. Despite its simplicity, this lie detector is highly accurate and surprisingly general. When trained on examples from a single setting\u2014prompting GPT-3.5 to lie about factual questions\u2014the detector generalises out-of-distribution to (1) other LLM architectures, (2) LLMs fine-tuned to lie, (3) sycophantic lies, and (4) lies emerging in real-life scenarios such as sales. These results indicate that LLMs have distinctive lie-related behavioural patterns, consistent across architectures and contexts, which could enable general-purpose lie detection", "type": "Poster", @@ -42856,9 +42856,9 @@ "Nate Gruver", "Anuroop Sriram", "Andrea Madotto", - "Andrew Wilson", - "Larry Zitnick", - "Zachary Ulissi" + "Andrew Gordon Wilson", + "C. Lawrence Zitnick", + "Zachary Ward Ulissi" ], "abstract": "Deep learning models have drastically accelerated materials discovery by accelerating predictive computational simulations like density functional theory (DFT). Large open computational materials databases such as the Materials Project or OQMD contain O($10^6$) known structures, and it is now straightforward to search those databases for materials with exciting properties. However, these databases are limited to experimentally known materials or candidates discovered in high-throughput computational campaigns. Many state-of-the-art engineering advances in solar photovaltaics, battery electrodes, and catalysts are made by discovering materials with outstanding properties that have not yet been discovered. Generative models are a natural solution to expand families of interest through sampling. While popular methods are typically constructed from variational autoencoders or diffusion models, we propose fine-tuning large language models for generation of stable materials. While unorthodox, fine-tuning large language models on text-encoded atomistic data is simple to implement yet reliable, with around 90\\% of sampled structures obeying physical constraints on atom positions and charges. 
Using energy of hull calculations from both learned ML potentials and gold-standard DFT calculations, we show that our strongest model (fine-tuned LLaMA-2 70B) can generate materials predicted to be metastable at about twice the rate (49\\% vs 28\\%) of CDVAE, a competing diffusion model. Because of text prompting's inherent flexibility, our models can simultaneously be used for unconditional generation of stable material, infilling of partial structures and text-conditional generation. Finally, we show that language models' ability to capture key symmetries of crystal structures improves with model scale, suggesting that the biases of pretrained LLMs are surprisingly well-suited for atomistic data.", "type": "Poster", @@ -42965,7 +42965,7 @@ }, { "id": 19434, - "title": "A Theoretical Explanation of Deep RL Performance in Stochastic Environments", + "title": "The Effective Horizon Explains Deep RL Performance in Stochastic Environments", "authors": [ "Cassidy Laidlaw", "Banghua Zhu", @@ -42986,8 +42986,8 @@ "title": "Weatherproofing Retrieval for Localization with Generative AI and Geometric Consistency", "authors": [ "Yannis Kalantidis", - "Mert Bulent SARIYILDIZ", - "Rafael Rezende", + "Mert B\u00fclent Sar\u0131y\u0131ld\u0131z", + "Rafael S. Rezende", "Philippe Weinzaepfel", "Diane Larlus", "Gabriela Csurka" @@ -43108,7 +43108,7 @@ "id": 19137, "title": "Constrained Decoding for Cross-lingual Label Projection", "authors": [ - "Duong Le", + "Duong Minh Le", "Yang Chen", "Alan Ritter", "Wei Xu" @@ -43128,14 +43128,14 @@ "authors": [ "Sam Toyer", "Olivia Watkins", - "Ethan Mendes", + "Ethan Adrian Mendes", "Justin Svegliato", "Luke Bailey", "Tiffany Wang", "Isaac Ong", "Karim Elmaaroufi", "Pieter Abbeel", - "trevor darrell", + "Trevor Darrell", "Alan Ritter", "Stuart Russell" ], @@ -43154,9 +43154,9 @@ "authors": [ "Yidong Wang", "Zhuohao Yu", + "Wenjin Yao", "Zhengran Zeng", "Linyi Yang", - "Wenjin Yao", "Cunxiang Wang", "Hao Chen", "Chaoya Jiang", @@ -43201,7 +43201,7 @@ "authors": [ "Tianze Luo", "Zhanfeng Mo", - "Sinno Pan" + "Sinno Jialin Pan" ], "abstract": "Graph Neural Networks are popular tools in graph representation learning that capture the graph structural properties. However, most GNNs employ single-resolution graph feature extraction, thereby failing to capture micro-level local patterns (high resolution) and macro-level graph cluster and community patterns (low resolution) simultaneously. Many multiresolution methods have been developed to capture graph patterns at multiple scales, but most of them depend on predefined and handcrafted multiresolution transforms that remain fixed throughout the training process once formulated. Due to variations in graph instances and distributions, fixed handcrafted transforms can not effectively tailor multiresolution representations to each graph instance. To acquire multiresolution representation suited to different graph instances and distributions, we introduce the Multiresolution Meta-Framelet-based Graph Convolutional Network (MM-FGCN), facilitating comprehensive and adaptive multiresolution analysis across diverse graphs. 
Extensive experiments demonstrate that our MM-FGCN achieves SOTA performance on various graph learning tasks.", "type": "Poster", @@ -43218,7 +43218,7 @@ "authors": [ "Yabo Zhang", "Yuxiang Wei", - "Dongsheng jiang", + "Dongsheng Jiang", "XIAOPENG ZHANG", "Wangmeng Zuo", "Qi Tian" @@ -43300,7 +43300,7 @@ }, { "id": 19418, - "title": "INViTE: INterpret and Control Vision Transformer with Text Explanations", + "title": "INViTE: INterpret and Control Vision-Language Models with Text Explanations", "authors": [ "Haozhe Chen", "Junfeng Yang", @@ -43408,7 +43408,7 @@ }, { "id": 18873, - "title": "Massive Editing for Large Language Model via Meta Learning", + "title": "Massive Editing for Large Language Models via Meta Learning", "authors": [ "Chenmien Tan", "Ge Zhang", @@ -43449,7 +43449,7 @@ "Xingjian Bai", "Klaus Kiendlhofer", "Charlie Griffin", - "Joar Skalse" + "Joar Max Viktor Skalse" ], "abstract": "Implementing a reward function that perfectly captures a complex task in the real world is impractical. As a result, it is often appropriate to think of the reward function as a *proxy* for the true objective rather than as its definition. We study this phenomenon through the lens of *Goodhart\u2019s law*, which predicts that increasing optimisation of an imperfect proxy beyond some critical point decreases performance on the true objective. First, we propose a way to *quantify* the magnitude of this effect and *show empirically* that optimising an imperfect proxy reward often leads to the behaviour predicted by Goodhart\u2019s law for a wide range of environments and reward functions. We then provide a *geometric explanation* for why Goodhart's law occurs in Markov decision processes. We use these theoretical insights to propose an *optimal early stopping method* that provably avoids the aforementioned pitfall and derive theoretical *regret bounds* for this method. Moreover, we derive a training method that maximises worst-case reward, for the setting where there is uncertainty about the true reward function. Finally, we evaluate our early stopping method experimentally. Our results support a foundation for a theoretically-principled study of reinforcement learning under reward misspecification.", "type": "Poster", @@ -43489,8 +43489,8 @@ "Meg Tong", "Maximilian Kaufmann", "Mikita Balesni", - "Asa Stickland", - "Tomek Korbak", + "Asa Cooper Stickland", + "Tomasz Korbak", "Owain Evans" ], "abstract": "We expose a surprising failure of generalization in auto-regressive large language models (LLMs). If a model is trained on a sentence of the form \"*A is B*\", it will not automatically generalize to the reverse direction \"*B is A*\". This is the **Reversal Curse**. For instance, if a model is trained on \"Olaf Scholz was the ninth Chancellor of Germany\", it will not automatically be able to answer the question, \"Who was the ninth Chancellor of Germany?\". Moreover, the likelihood of the correct answer (\"Olaf Scholz\") will not be higher than for a random name. Thus, models exhibit a basic failure of logical deduction and do not generalize a prevalent pattern in their training set (i.e. if \"*A is B*\" occurs, \"*B is A*\" is more likely to occur). We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as \"Uriah Hawthorne is the composer of *Abyssal Melodies*\" and showing that they fail to correctly answer \"Who composed *Abyssal Melodies?*\". 
The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation. We also evaluate ChatGPT (GPT-3.5 and GPT-4) on questions about real-world celebrities, such as \"Who is Tom Cruise's mother? [A: Mary Lee Pfeiffer]\" and the reverse \"Who is Mary Lee Pfeiffer's son?\". GPT-4 correctly answers questions like the former 79% of the time, compared to 33% for the latter. This shows a failure of logical deduction that we hypothesize is caused by the Reversal Curse.", @@ -43546,7 +43546,7 @@ "Lev Proleev", "Diana Mincu", "Jilin Chen", - "Katherine Heller", + "Katherine A Heller", "Subhrajit Roy" ], "abstract": "Prompting and in-context learning (ICL) have become efficient learning paradigms for large language models (LLMs). However, LLMs suffer from prompt brittleness and various bias factors in the prompt, including but not limited to the formatting, the choice verbalizers, and the ICL examples. To address this problem that results in unexpected performance degradation, calibration methods have been developed to mitigate the effects of these biases while recovering LLM performance. In this work, we first conduct a systematic analysis of the existing calibration methods, where we both provide a unified view and reveal the failure cases. Inspired by these analyses, we propose Batch Calibration (BC), a simple yet intuitive method that controls the contextual bias from the batched input, unifies various prior approaches and effectively addresses the aforementioned issues. BC is zero-shot, inference-only, and incurs negligible additional costs. In the few-shot setup, we further extend BC to allow it to learn the contextual bias from labeled data. We validate the effectiveness of BC with PaLM 2-(S, M, L) and CLIP models and demonstrate state-of-the-art performance over previous calibration baselines across more than 10 natural language understanding and image classification tasks.", @@ -43562,7 +43562,7 @@ "id": 19408, "title": "Prompt Risk Control: A Rigorous Framework for Responsible Deployment of Large Language Models", "authors": [ - "Thomas Zollo", + "Thomas P Zollo", "Todd Morrill", "Zhun Deng", "Jake Snell", @@ -43623,9 +43623,9 @@ "id": 19405, "title": "In-Context Learning Dynamics with Random Binary Sequences", "authors": [ - "Eric Bigelow", + "Eric J Bigelow", "Ekdeep Singh Lubana", - "Robert Dick", + "Robert P. Dick", "Hidenori Tanaka", "Tomer Ullman" ], @@ -43643,7 +43643,7 @@ "title": "Understanding when Dynamics-Invariant Data Augmentations Benefit Model-free Reinforcement Learning Updates", "authors": [ "Nicholas Corrado", - "Josiah Hanna" + "Josiah P. 
Hanna" ], "abstract": "Recently, data augmentation (DA) has emerged as a method for leveraging domain knowledge to inexpensively generate additional data in reinforcement learning (RL) tasks, often yielding substantial improvements in data efficiency.While prior work has demonstrated the utility of incorporating augmented data directly into model-free RL updates,it is not well-understood when a particular DA strategy will improve data efficiency.In this paper, we seek to identify general aspects of DA responsible for observed learning improvements.Our study focuses on sparse-reward tasks with dynamics-invariant data augmentation functions, serving as an initial step towards a more general understanding of DA and its integration into RL training.Experimentally, we isolate three relevant aspects of DA: state-action coverage, reward density, and the number of augmented transitions generated per update (the augmented replay ratio).From our experiments, we draw two conclusions: (1) increasing state-action coverage often has a much greater impact on data efficiency than increasing reward density, and (2) decreasing the augmented replay ratio substantially improves data efficiency.In fact, certain tasks in our empirical study are solvable only when the replay ratio is sufficiently low.", "type": "Poster", @@ -43656,7 +43656,7 @@ }, { "id": 19496, - "title": "Improving Generalization in Equivariant Graph Neural Networks with Physical Inductive Biases", + "title": "SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases", "authors": [ "Yang Liu", "Jiashun Cheng", @@ -43719,7 +43719,7 @@ "id": 17836, "title": "Learning Robust Generalizable Radiance Field with Visibility and Feature Augmented Point Representation", "authors": [ - "Jiaxu Wang", + "WANG Jiaxu", "Ziyi Zhang", "Renjing Xu" ], @@ -43739,7 +43739,7 @@ "Mingyuan Sun", "Donghao Zhang", "Zongyuan Ge", - "Jiaxu Wang", + "WANG Jiaxu", "Jia Li", "Zheng Fang", "Renjing Xu" @@ -43760,7 +43760,7 @@ "title": "NOLA: Networks as Linear Combination of Low Rank Random Basis", "authors": [ "Soroush Abbasi Koohpayegani", - "K L Navaneet", + "Navaneet K L", "Parsa Nooralinejad", "Soheil Kolouri", "Hamed Pirsiavash" @@ -43781,7 +43781,7 @@ "Jing-Cheng Pang", "Pengyuan Wang", "Kaiyuan Li", - "XiongHui Chen", + "Xiong-Hui Chen", "Jiacheng Xu", "Zongzhang Zhang", "Yang Yu" @@ -43801,13 +43801,13 @@ "authors": [ "Herbie Bradley", "Andrew Dai", - "Hannah Teufel", + "Hannah Benita Teufel", "Jenny Zhang", "Koen Oostermeijer", "Marco Bellagente", "Jeff Clune", "Kenneth Stanley", - "G. Schott", + "Gregory Schott", "Joel Lehman" ], "abstract": "In many text-generation problems, users may prefer not only a single response, but a diverse range of high-quality outputs from which to choose. Quality-diversity (QD) search algorithms aim at such outcomes, by continually improving and diversifying a population of candidates. However, the applicability of QD to qualitative domains, like creative writing, has been limited by the difficulty of algorithmically specifying measures of quality and diversity. Interestingly, recent developments in language models (LMs) have enabled guiding search through \\emph{AI feedback}, wherein LMs are prompted in natural language to evaluate qualitative aspects of text. Leveraging this development, we introduce Quality-Diversity through AI Feedback (QDAIF), wherein an evolutionary algorithm applies LMs to both generate variation and evaluate the quality and diversity of candidate text. 
When assessed on creative writing domains, QDAIF covers more of a specified search space with high-quality samples than do non-QD controls. Further, human evaluation of QDAIF-generated creative texts validates reasonable agreement between AI and human evaluation. Our results thus highlight the potential of AI feedback to guide open-ended search for creative and original solutions, providing a recipe that seemingly generalizes to many domains and modalities. In this way, QDAIF is a step towards AI systems that can independently search, diversify, evaluate, and improve, which are among the core skills underlying human society's capacity for innovation.", @@ -43854,7 +43854,7 @@ "Xinchi Qiu", "Yan Gao", "Hongxiang Fan", - "Nic Lane" + "Nicholas Donald Lane" ], "abstract": "Pretrained large language models (LLMs) have emerged as a cornerstone in modern natural language processing, with their utility expanding to various applications and languages. However, the fine-tuning of multilingual LLMs, particularly for low-resource languages, is fraught with challenges steming from data-sharing restrictions (the physical border) and from the inherent linguistic differences (the linguistic border). These barriers hinder users of various languages, especially those in low-resource regions, from fully benefiting from the advantages of LLMs.To overcome these challenges, we propose the Federated Prompt Tuning Paradigm for Multilingual Scenarios, which leverages parameter-efficient fine-tuning in a manner that preserves user privacy. We have designed a comprehensive set of experiments and introduced the concept of \"language distance\" to highlight the several strengths of this paradigm. Even under computational constraints, our method not only bolsters data efficiency but also facilitates mutual enhancements across languages, particularly benefiting low-resource ones. Compared to traditional local crosslingual transfer tuning methods, our approach achieves a 6.9\\% higher accuracy, reduces the training parameters by over 99\\%, and demonstrates stronger cross-lingual generalization. Such findings underscore the potential of our approach to promote social equality, ensure user privacy, and champion linguistic diversity.", "type": "Poster", @@ -43867,12 +43867,12 @@ }, { "id": 19082, - "title": "Illusory Attacks: Detectability Matters in Adversarial Attacks on Sequential Decision-Makers", + "title": "Illusory Attacks: Information-theoretic detectability matters in adversarial attacks", "authors": [ "Tim Franzmeyer", - "Stephen McAleer", + "Stephen Marcus McAleer", "Joao F. Henriques", - "Jakob Foerster", + "Jakob Nicolaus Foerster", "Philip Torr", "Adel Bibi", "Christian Schroeder de Witt" @@ -43891,8 +43891,8 @@ "title": "Beyond Vanilla Variational Autoencoders: Detecting Posterior Collapse in Conditional and Hierarchical Variational Autoencoders", "authors": [ "Hien Dang", - "Tho-Huu Tran", - "Tan Nguyen", + "Tho Tran Huu", + "Tan Minh Nguyen", "Nhat Ho" ], "abstract": "The posterior collapse phenomenon in variational autoencoder (VAE), where the variational posterior distribution closely matches the prior distribution, can hinder the quality of the learned latent variables. As a consequence of posterior collapse, the latent variables extracted by the encoder in VAE preserve less information from the input data and thus fail to produce meaningful representations as input to the reconstruction process in the decoder. 
While this phenomenon has been an actively addressed topic related to VAE performance, the theory for posterior collapse remains underdeveloped, especially beyond the standard VAE. In this work, we advance the theoretical understanding of posterior collapse to two important and prevalent yet less studied classes of VAE: conditional VAE and hierarchical VAE. Specifically, via a non-trivial theoretical analysis of linear conditional VAE and hierarchical VAE with two levels of latent, we prove that the cause of posterior collapse in these models includes the correlation between the input and output of the conditional VAE and the effect of learnable encoder variance in the hierarchical VAE. We empirically validate our theoretical findings for linear conditional and hierarchical VAE and demonstrate that these results are also predictive for non-linear cases with extensive experiments.", "type": "Poster", @@ -43906,7 +43906,7 @@ }, { "id": 18058, - "title": "Ultra-sparse network advantage in deep learning via Cannistraci-Hebb brain-inspired training with hyperbolic meta-deep community-layered epitopology", + "title": "Epitopological learning and Cannistraci-Hebb network shape intelligence brain-inspired theory for ultra-sparse advantage in deep learning", "authors": [ "Yingtao Zhang", "Jialin Zhao", @@ -43925,11 +43925,11 @@ }, { "id": 19778, - "title": "Cameras as Rays: Sparse-view Pose Estimation via Ray Diffusion", + "title": "Cameras as Rays: Pose Estimation via Ray Diffusion", "authors": [ - "Jason Zhang", + "Jason Y. Zhang", "Amy Lin", - "MONEISH KUMAR", + "Moneish Kumar", "Tzu-Hsuan Yang", "Deva Ramanan", "Shubham Tulsiani" ], @@ -43989,7 +43989,7 @@ "Jie Zhang", "Shuyang Sun", "Philip Torr", - "Bo ZHAO" + "Bo Zhao" ], "abstract": "Synthetic training data has gained prominence in numerous learning tasks and scenarios, offering advantages such as dataset augmentation, generalization evaluation, and privacy preservation. Despite these benefits, the efficiency of synthetic data generated by current methodologies remains inferior when training advanced deep models exclusively, limiting its practical utility. To address this challenge, we analyze the principles underlying training data synthesis for supervised learning and elucidate a principled theoretical framework from the distribution-matching perspective that explicates the mechanisms governing synthesis efficacy. Through extensive experiments, we demonstrate the effectiveness of our synthetic data across diverse image classification tasks, both as a replacement for and augmentation to real datasets, while also benefiting challenging tasks such as out-of-distribution generalization and privacy preservation.", "type": "Poster", @@ -44002,7 +44002,7 @@ }, { "id": 18691, - "title": "Learning Unsupervised World Models for Autonomous Driving via Discrete Diffusion", + "title": "Copilot4D: Learning Unsupervised World Models for Autonomous Driving via Discrete Diffusion", "authors": [ "Lunjun Zhang", "Yuwen Xiong", @@ -44024,7 +44024,7 @@ "id": 18144, "title": "FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores", "authors": [ - "Dan Fu", + "Daniel Y Fu", "Hermann Kumbong", "Eric Nguyen", "Christopher Re" ], @@ -44042,12 +44042,12 @@ "id": 18327, "title": "R-MAE: Regions Meet Masked Autoencoders", "authors": [ - "Duy-Kien Nguyen", + "Duy Kien Nguyen", "Yanghao Li", "Vaibhav Aggarwal", "Martin R. Oswald", "Alexander Kirillov", - "Cees G Snoek", + "Cees G. M. 
Snoek", "Xinlei Chen" ], "abstract": "In this work, we explore regions as the visual analogue of words for self-supervised image representation learning. Inspired by Masked Autoencoding (MAE), a generative pre-training baseline, we propose masked region autoencoding to learn from groups of pixels or regions. Specifically, we design an architecture which efficiently addresses the one-to-many mapping between images and regions, while being highly effective especially with high-quality regions. When integrated with MAE, our approach (R-MAE) demonstrates consistent improvements across various pre-training datasets and downstream detection and segmentation benchmarks, with negligible computational overheads. Beyond the quantitative evaluation, our analysis indicates the models pre-trained with masked region autoencoding unlock the potential for interactive segmentation.", @@ -44121,7 +44121,7 @@ "Behzad Shayegh", "Yanshuai Cao", "Xiaodan Zhu", - "Jackie Cheung", + "Jackie CK Cheung", "Lili Mou" ], "abstract": "We investigate the unsupervised constituency parsing task, which organizes words and phrases of a sentence into a hierarchical structure without using linguistically annotated data. We observe that existing unsupervised parsers capture differing aspects of parsing structures, which can be leveraged to enhance unsupervised parsing performance.To this end, we propose a notion of ``tree averaging,'' based on which we further propose a novel ensemble method for unsupervised parsing.To improve inference efficiency, we further distill the ensemble knowledge into a student model; such an ensemble-then-distill process is an effective approach to mitigate the over-smoothing problem existing in common multi-teacher distilling methods.Experiments show that our method surpasses all previous approaches, consistently demonstrating its effectiveness and robustness across various runs, with different ensemble components, and under domain-shift conditions.", @@ -44158,7 +44158,7 @@ "Zilong Wang", "Hao Zhang", "Chun-Liang Li", - "Julian M Eisenschlos", + "Julian Martin Eisenschlos", "Vincent Perot", "Zifeng Wang", "Lesly Miculicich",