Schema (column name, type, observed range):

bibtex_url                    null
proceedings                   string (lengths 42 to 42)
bibtext                       string (lengths 302 to 2.02k)
abstract                      string (lengths 566 to 2.48k)
title                         string (lengths 16 to 179)
authors                       sequence (lengths 1 to 76)
id                            string (1 distinct value)
type                          string (2 distinct values)
arxiv_id                      string (lengths 0 to 10)
GitHub                        sequence (lengths 1 to 1)
paper_page                    string (lengths 0 to 40)
n_linked_authors              int64 (-1 to 24)
upvotes                       int64 (-1 to 86)
num_comments                  int64 (-1 to 10)
n_authors                     int64 (-1 to 75)
Models                        sequence (lengths 0 to 37)
Datasets                      sequence (lengths 0 to 10)
Spaces                        sequence (lengths 0 to 26)
old_Models                    sequence (lengths 0 to 37)
old_Datasets                  sequence (lengths 0 to 10)
old_Spaces                    sequence (lengths 0 to 26)
paper_page_exists_pre_conf    int64 (0 to 1)

The rows that follow repeat these fields in order, one paper per record.
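As a point of orientation only, here is a minimal Python sketch of how a dump with this schema could be loaded and filtered using the Hugging Face `datasets` library; the repository id is a placeholder assumption, since the hosting location is not stated in this document.

```python
# Minimal sketch (not the authors' tooling): load a dump with the schema above
# via the Hugging Face `datasets` library. The repository id below is a
# placeholder assumption; substitute the actual location of this data.
from datasets import load_dataset

ds = load_dataset("someuser/neurips-2024-db-track-papers", split="train")  # hypothetical repo id

print(ds.column_names)  # should match the schema listed above

# Example query over the schema fields: count oral vs. poster entries.
orals = ds.filter(lambda row: row["type"] == "oral")
print(f"{len(orals)} oral papers out of {len(ds)} rows")
```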
null
https://openreview.net/forum?id=zogaeVpbaE
@inproceedings{ tan2024devbench, title={DevBench: A multimodal developmental benchmark for language learning}, author={Alvin Wei Ming Tan and Chunhua Yu and Bria Lorelle Long and Wanjing Anya Ma and Tonya Murray and Rebecca D. Silverman and Jason D Yeatman and Michael Frank}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=zogaeVpbaE} }
How (dis)similar are the learning trajectories of vision–language models and children? Recent modeling work has attempted to understand the gap between models’ and humans’ data efficiency by constructing models trained on less data, especially multimodal naturalistic data. However, such models are often evaluated on adult-level benchmarks, with limited breadth in language abilities tested, and without direct comparison to behavioral data. We introduce DevBench, a multimodal benchmark comprising seven language evaluation tasks spanning the domains of lexical, syntactic, and semantic ability, with behavioral data from both children and adults. We evaluate a set of vision–language models on these tasks, comparing models and humans on their response patterns, not their absolute performance. Across tasks, models exhibit variation in their closeness to human response patterns, and models that perform better on a task also more closely resemble human behavioral responses. We also examine the developmental trajectory of OpenCLIP over training, finding that greater training results in closer approximations to adult response patterns. DevBench thus provides a benchmark for comparing models to human language development. These comparisons highlight ways in which model and human language learning processes diverge, providing insight into entry points for improving language models.
DevBench: A multimodal developmental benchmark for language learning
[ "Alvin Wei Ming Tan", "Chunhua Yu", "Bria Lorelle Long", "Wanjing Anya Ma", "Tonya Murray", "Rebecca D. Silverman", "Jason D Yeatman", "Michael Frank" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.10215
[ "https://github.com/alvinwmtan/dev-bench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
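Each record keeps its OpenReview link in the `proceedings` field and its arXiv identifier (when available) in `arxiv_id`. The sketch below hand-copies a few fields from the DevBench row above purely for illustration, and shows one way to recover the OpenReview forum id and an arXiv URL from those fields.

```python
# Illustration only: derive the OpenReview forum id and an arXiv link from a
# single row. The dict literal mirrors fields of the DevBench record above;
# in practice the row would come from the loaded dataset.
from urllib.parse import urlparse, parse_qs

row = {
    "proceedings": "https://openreview.net/forum?id=zogaeVpbaE",
    "arxiv_id": "2406.10215",
    "type": "oral",
}

forum_id = parse_qs(urlparse(row["proceedings"]).query)["id"][0]  # -> "zogaeVpbaE"
arxiv_url = f"https://arxiv.org/abs/{row['arxiv_id']}" if row["arxiv_id"] else None
print(forum_id, arxiv_url, row["type"])
```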
null
https://openreview.net/forum?id=zgSnSZ0Re6
@inproceedings{ zhu2024point, title={Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning}, author={Haoyi Zhu and Yating Wang and Di Huang and Weicai Ye and Wanli Ouyang and Tong He}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=zgSnSZ0Re6} }
In robot learning, the observation space is crucial due to the distinct characteristics of different modalities, which can potentially become a bottleneck alongside policy design. In this study, we explore the influence of various observation spaces on robot learning, focusing on three predominant modalities: RGB, RGB-D, and point cloud. We introduce OBSBench, a benchmark comprising two simulators and 125 tasks, along with standardized pipelines for various encoders and policy baselines. Extensive experiments on diverse contact-rich manipulation tasks reveal a notable trend: point cloud-based methods, even those with the simplest designs, frequently outperform their RGB and RGB-D counterparts. This trend persists in both scenarios: training from scratch and utilizing pre-training. Furthermore, our findings demonstrate that point cloud observations often yield better policy performance and significantly stronger generalization capabilities across various geometric and visual conditions. These outcomes suggest that the 3D point cloud is a valuable observation modality for intricate robotic tasks. We also suggest that incorporating both appearance and coordinate information can enhance the performance of point cloud methods. We hope our work provides valuable insights and guidance for designing more generalizable and robust robotic models.
Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning
[ "Haoyi Zhu", "Yating Wang", "Di Huang", "Weicai Ye", "Wanli Ouyang", "Tong He" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2402.02500
[ "https://github.com/haoyizhu/realrobot" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=zg8dpAGl1I
@inproceedings{ nikulin2024xlandminigrid, title={{XL}and-MiniGrid: Scalable Meta-Reinforcement Learning Environments in {JAX}}, author={Alexander Nikulin and Vladislav Kurenkov and Ilya Zisman and Artem Sergeevich Agarkov and Viacheslav Sinii and Sergey Kolesnikov}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=zg8dpAGl1I} }
Inspired by the diversity and depth of XLand and the simplicity and minimalism of MiniGrid, we present XLand-MiniGrid, a suite of tools and grid-world environments for meta-reinforcement learning research. Written in JAX, XLand-MiniGrid is designed to be highly scalable and can potentially run on GPU or TPU accelerators, democratizing large-scale experimentation with limited resources. Along with the environments, XLand-MiniGrid provides pre-sampled benchmarks with millions of unique tasks of varying difficulty and easy-to-use baselines that allow users to quickly start training adaptive agents. In addition, we have conducted a preliminary analysis of scaling and generalization, showing that our baselines are capable of reaching millions of steps per second during training and validating that the proposed benchmarks are challenging. XLand-MiniGrid is open-source and available at \url{https://github.com/corl-team/xland-minigrid}.
XLand-MiniGrid: Scalable Meta-Reinforcement Learning Environments in JAX
[ "Alexander Nikulin", "Vladislav Kurenkov", "Ilya Zisman", "Artem Sergeevich Agarkov", "Viacheslav Sinii", "Sergey Kolesnikov" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2312.12044
[ "https://github.com/corl-team/xland-minigrid" ]
https://huggingface.co/papers/2312.12044
4
4
1
6
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=zQ3qU0xWZ5
@inproceedings{ mathai2024kgym, title={kGym: A Platform and Dataset to Benchmark Large Language Models on Linux Kernel Crash Resolution}, author={Alex Mathai and Chenxi Huang and Petros Maniatis and Aleksandr Nogikh and Franjo Ivancic and Junfeng Yang and Baishakhi Ray}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=zQ3qU0xWZ5} }
Large Language Models (LLMs) are consistently improving at increasingly realistic software engineering (SE) tasks. In real-world software stacks, significant SE effort is spent developing foundational system software like the Linux kernel. Unlike application-level software, a systems codebase like Linux is multilingual (low-level C/Assembly/Bash/Rust); gigantic (>20 million lines); critical (impacting billions of devices worldwide), and highly concurrent (involving complex multi-threading). To evaluate if machine learning (ML) models are useful while developing such large-scale systems-level software, we introduce kGym (a platform) and kBench (a dataset). The kGym platform provides a SE environment for large-scale experiments on the Linux kernel, including compiling and running kernels in parallel across several virtual machines, detecting operations and crashes, inspecting logs, and querying and patching the code base. We use kGym to facilitate evaluation on kBench, a crash resolution benchmark drawn from real-world Linux kernel bugs. An example bug in kBench contains crashing stack traces, a bug-reproducer file, a developer-written fix, and other associated data. To understand current performance, we conduct baseline experiments by prompting LLMs to resolve Linux kernel crashes. Our initial evaluations reveal that the best performing LLM achieves 0.72\% and 5.38\% in the unassisted and assisted (i.e., buggy files disclosed to the model) settings, respectively. These results highlight the need for further research to enhance model performance in SE tasks. Improving performance on kBench requires models to master new learning skills, including understanding the cause of crashes and repairing faults, writing memory-safe and hardware-aware code, and understanding concurrency. As a result, this work opens up multiple avenues of research at the intersection of machine learning and systems software.
kGym: A Platform and Dataset to Benchmark Large Language Models on Linux Kernel Crash Resolution
[ "Alex Mathai", "Chenxi Huang", "Petros Maniatis", "Aleksandr Nogikh", "Franjo Ivancic", "Junfeng Yang", "Baishakhi Ray" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.02680
[ "" ]
https://huggingface.co/papers/2407.02680
0
0
0
7
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=zGfKPqunJG
@inproceedings{ lin2024e, title={\$E{\textasciicircum}3\$: Exploring Embodied Emotion Through A Large-Scale Egocentric Video Dataset}, author={Wang Lin and Yueying Feng and WenKang Han and Tao Jin and Zhou Zhao and Fei Wu and Chang Yao and Jingyuan Chen}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=zGfKPqunJG} }
Understanding human emotions is fundamental to enhancing human-computer interaction, especially for embodied agents that mimic human behavior. Traditional emotion analysis often takes a third-person perspective, limiting the ability of agents to interact naturally and empathetically. To address this gap, this paper presents $E^3$ for Exploring Embodied Emotion, the first massive first-person view video dataset. $E^3$ contains more than $50$ hours of video, capturing $8$ different emotion types in diverse scenarios and languages. The dataset features videos recorded by individuals in their daily lives, capturing a wide range of real-world emotions conveyed through visual, acoustic, and textual modalities. By leveraging this dataset, we define $4$ core benchmark tasks - emotion recognition, emotion classification, emotion localization, and emotion reasoning - supported by more than $80$k manually crafted annotations, providing a comprehensive resource for training and evaluating emotion analysis models. We further present Emotion-LlaMa, which complements visual modality with acoustic modality to enhance the understanding of emotion in first-person videos. The results of comparison experiments with a large number of baselines demonstrate the superiority of Emotion-LlaMa and set a new benchmark for embodied emotion analysis. We expect that $E^3$ can promote advances in multimodal understanding, robotics, and augmented reality, and provide a solid foundation for the development of more empathetic and context-aware embodied agents.
E^3: Exploring Embodied Emotion Through A Large-Scale Egocentric Video Dataset
[ "Wang Lin", "Yueying Feng", "WenKang Han", "Tao Jin", "Zhou Zhao", "Fei Wu", "Chang Yao", "Jingyuan Chen" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=z64azPC6Nl
@inproceedings{ zhang2024gtsinger, title={{GTS}inger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks}, author={Yu Zhang and Changhao Pan and Wenxiang Guo and Ruiqi Li and Zhiyuan Zhu and Jialei Wang and Wenhao Xu and Jingyu Lu and Zhiqing Hong and Chuxin Wang and Lichao Zhang and Jinzheng He and Ziyue Jiang and Yuxin Chen and Chen Yang and Jiecheng Zhou and Xinyu Cheng and Zhou Zhao}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=z64azPC6Nl} }
The scarcity of high-quality and multi-task singing datasets significantly hinders the development of diverse controllable and personalized singing tasks, as existing singing datasets suffer from low quality, limited diversity of languages and singers, absence of multi-technique information and realistic music scores, and poor task suitability. To tackle these problems, we present GTSinger, a large Global, multi-Technique, free-to-use, high-quality singing corpus with realistic music scores, designed for all singing tasks, along with its benchmarks. Particularly, (1) we collect 80.59 hours of high-quality singing voices, forming the largest recorded singing dataset; (2) 20 professional singers across nine widely spoken languages offer diverse timbres and styles; (3) we provide controlled comparison and phoneme-level annotations of six commonly used singing techniques, helping technique modeling and control; (4) GTSinger offers realistic music scores, assisting real-world musical composition; (5) singing voices are accompanied by manual phoneme-to-audio alignments, global style labels, and 16.16 hours of paired speech for various singing tasks. Moreover, to facilitate the use of GTSinger, we conduct four benchmark experiments: technique-controllable singing voice synthesis, technique recognition, style transfer, and speech-to-singing conversion.
GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks
[ "Yu Zhang", "Changhao Pan", "Wenxiang Guo", "Ruiqi Li", "Zhiyuan Zhu", "Jialei Wang", "Wenhao Xu", "Jingyu Lu", "Zhiqing Hong", "Chuxin Wang", "Lichao Zhang", "Jinzheng He", "Ziyue Jiang", "Yuxin Chen", "Chen Yang", "Jiecheng Zhou", "Xinyu Cheng", "Zhou Zhao" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2409.13832
[ "https://github.com/gtsinger/gtsinger" ]
https://huggingface.co/papers/2409.13832
0
0
0
18
[]
[ "GTSinger/GTSinger" ]
[]
[]
[ "GTSinger/GTSinger" ]
[]
1
null
https://openreview.net/forum?id=z1nITsHKb4
@inproceedings{ lyu2024mmscan, title={{MMS}can: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations}, author={Ruiyuan Lyu and Jingli Lin and Tai Wang and Shuai Yang and Xiaohan Mao and Yilun Chen and Runsen Xu and Haifeng Huang and Chenming Zhu and Dahua Lin and Jiangmiao Pang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=z1nITsHKb4} }
With the emergence of LLMs and their integration with other data modalities, multi-modal 3D perception has attracted increasing attention due to its connection to the physical world, and has made rapid progress. However, limited by existing datasets, previous works mainly focus on understanding object properties or inter-object spatial relationships in a 3D scene. To tackle this problem, this paper builds the largest ever multi-modal 3D scene dataset and benchmark with hierarchical grounded language annotations, MMScan. It is constructed based on a top-down logic, from region to object level and from single targets to inter-target relationships, covering holistic aspects of spatial and attribute understanding. The overall pipeline incorporates powerful VLMs via carefully designed prompts to initialize the annotations efficiently, and further involves human correction in the loop to ensure the annotations are natural, correct, and comprehensive. Built upon existing 3D scanning data, the resulting multi-modal 3D dataset encompasses 1.4M meta-annotated captions on 109k objects and 7.7k regions, as well as over 3.04M diverse samples for 3D visual grounding and question-answering benchmarks. We evaluate representative baselines on our benchmarks, analyze their capabilities in different aspects, and showcase the key problems to be addressed in the future. Furthermore, we use this high-quality dataset to train state-of-the-art 3D visual grounding models and LLMs and obtain remarkable performance improvements both on existing benchmarks and in in-the-wild evaluation.
MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations
[ "Ruiyuan Lyu", "Jingli Lin", "Tai Wang", "Shuai Yang", "Xiaohan Mao", "Yilun Chen", "Runsen Xu", "Haifeng Huang", "Chenming Zhu", "Dahua Lin", "Jiangmiao Pang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.09401
[ "https://github.com/openrobotlab/embodiedscan" ]
https://huggingface.co/papers/2406.09401
0
0
1
11
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=yjj8ele147
@inproceedings{ magnusson2024paloma, title={Paloma: A Benchmark for Evaluating Language Model Fit}, author={Ian Magnusson and Akshita Bhagia and Valentin Hofmann and Luca Soldaini and Ananya Harsh Jha and Oyvind Tafjord and Dustin Schwenk and Evan Pete Walsh and Yanai Elazar and Kyle Lo and Dirk Groeneveld and Iz Beltagy and Hannaneh Hajishirzi and Noah A. Smith and Kyle Richardson and Jesse Dodge}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=yjj8ele147} }
Evaluations of language models (LMs) commonly report perplexity on monolithic data held out from training. Implicitly or explicitly, this data is composed of domains—varying distributions of language. We introduce Perplexity Analysis for Language Model Assessment (Paloma), a benchmark to measure LM fit to 546 English and code domains, instead of assuming perplexity on one distribution extrapolates to others. We include two new datasets of the top 100 subreddits (e.g., r/depression on Reddit) and programming languages (e.g., Java on GitHub), both sources common in contemporary LMs. With our benchmark, we release 6 baseline 1B LMs carefully controlled to provide fair comparisons about which pretraining corpus is best and code for others to apply those controls to their own experiments. Our case studies demonstrate how the fine-grained results from Paloma surface findings such as that models pretrained without data beyond Common Crawl exhibit anomalous gaps in LM fit to many domains or that loss is dominated by the most frequently occurring strings in the vocabulary.
Paloma: A Benchmark for Evaluating Language Model Fit
[ "Ian Magnusson", "Akshita Bhagia", "Valentin Hofmann", "Luca Soldaini", "Ananya Harsh Jha", "Oyvind Tafjord", "Dustin Schwenk", "Evan Pete Walsh", "Yanai Elazar", "Kyle Lo", "Dirk Groeneveld", "Iz Beltagy", "Hannaneh Hajishirzi", "Noah A. Smith", "Kyle Richardson", "Jesse Dodge" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2312.10523
[ "" ]
https://huggingface.co/papers/2312.10523
12
12
2
16
[]
[ "allenai/paloma" ]
[]
[]
[ "allenai/paloma" ]
[]
1
null
https://openreview.net/forum?id=yg4Tt2QeU7
@inproceedings{ xu2024bag, title={Bag of Tricks: Benchmarking of Jailbreak Attacks on {LLM}s}, author={Zhao Xu and Fan Liu and Hao Liu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=yg4Tt2QeU7} }
Although Large Language Models (LLMs) have demonstrated significant capabilities in executing complex tasks in a zero-shot manner, they are susceptible to jailbreak attacks and can be manipulated to produce harmful outputs. Recently, a growing body of research has categorized jailbreak attacks into token-level and prompt-level attacks. However, previous work primarily overlooks the diverse key factors of jailbreak attacks, with most studies concentrating on LLM vulnerabilities and lacking exploration of defense-enhanced LLMs. To address these issues, we introduced JailTrickBench to evaluate the impact of various attack settings on LLM performance and provide a baseline for jailbreak attacks, encouraging the adoption of a standardized evaluation framework. Specifically, we evaluate the eight key factors of implementing jailbreak attacks on LLMs from both target-level and attack-level perspectives. We further conduct seven representative jailbreak attacks on six defense methods across two widely used datasets, encompassing approximately 354 experiments with about 55,000 GPU hours on A800-80G. Our experimental results highlight the need for standardized benchmarking to evaluate these attacks on defense-enhanced LLMs. Our code is available at https://github.com/usail-hkust/JailTrickBench.
Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs
[ "Zhao Xu", "Fan Liu", "Hao Liu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.09324
[ "https://github.com/usail-hkust/bag_of_tricks_for_llm_jailbreaking" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=yWMMKm81vZ
@inproceedings{ nguyen2024seafloorai, title={Seafloor{AI}: A Large-scale Vision-Language Dataset for Seafloor Geological Survey}, author={Kien X Nguyen and Fengchun Qiao and Arthur Trembanis and Xi Peng}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=yWMMKm81vZ} }
A major obstacle to the advancement of machine learning models in marine science, particularly in sonar imagery analysis, is the scarcity of AI-ready datasets. While there have been efforts to make AI-ready sonar image datasets publicly available, they suffer from limitations in terms of environment setting and scale. To bridge this gap, we introduce $\texttt{SeafloorAI}$, the first extensive AI-ready dataset for seafloor mapping across 5 geological layers, curated in collaboration with marine scientists. We further extend the dataset to $\texttt{SeafloorGenAI}$ by incorporating the language component in order to facilitate the development of both $\textit{vision}$- and $\textit{language}$-capable machine learning models for sonar imagery. The dataset consists of 62 geo-distributed data surveys spanning 17,300 square kilometers, with 696K sonar images, 827K annotated segmentation masks, 696K detailed language descriptions, and approximately 7M question-answer pairs. By making our data processing source code publicly available, we aim to engage the marine science community to enrich the data pool and inspire the machine learning community to develop more robust models. This collaborative approach will enhance the capabilities and applications of our datasets within both fields.
SeafloorAI: A Large-scale Vision-Language Dataset for Seafloor Geological Survey
[ "Kien X Nguyen", "Fengchun Qiao", "Arthur Trembanis", "Xi Peng" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2411.00172
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=yUEBXN3cvX
@inproceedings{ li2024on, title={On the Effects of Data Scale on {UI} Control Agents}, author={Wei Li and William E Bishop and Alice Li and Christopher Rawles and Folawiyo Campbell-Ajala and Divya Tyamagundlu and Oriana Riva}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=yUEBXN3cvX} }
Autonomous agents that control user interfaces to accomplish human tasks are emerging. Leveraging LLMs to power such agents has been of special interest, but unless fine-tuned on human-collected task demonstrations, performance is still relatively low. In this work we study whether fine-tuning alone is a viable approach for building real-world UI control agents. To this end we collect and release a new dataset, AndroidControl, consisting of 15,283 demonstrations of everyday tasks with Android apps. Compared to existing datasets, each AndroidControl task instance includes both high- and low-level human-generated instructions, allowing us to explore the level of task complexity an agent can handle. Moreover, AndroidControl is the most diverse computer control dataset to date, including 14,548 unique tasks over 833 Android apps, thus allowing us to conduct in-depth analysis of model performance in and out of the domain of the training data. Using the dataset, we find that when tested in domain, fine-tuned models outperform zero- and few-shot baselines and scale in such a way that robust performance might feasibly be obtained simply by collecting more data. Out of domain, performance scales significantly more slowly, suggesting that, particularly for high-level tasks, fine-tuning on more data alone may be insufficient for achieving robust out-of-domain performance.
On the Effects of Data Scale on UI Control Agents
[ "Wei Li", "William E Bishop", "Alice Li", "Christopher Rawles", "Folawiyo Campbell-Ajala", "Divya Tyamagundlu", "Oriana Riva" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.03679
[ "" ]
https://huggingface.co/papers/2406.03679
0
0
0
7
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=yS1dUkQFnu
@inproceedings{ xin2024vpetl, title={V-{PETL} Bench: A Unified Visual Parameter-Efficient Transfer Learning Benchmark}, author={Yi Xin and Siqi Luo and Xuyang Liu and Yuntao Du. and Haodi Zhou and Xinyu Cheng and Christina Luoluo Lee and Junlong Du and Haozhe Wang and MingCai Chen and Ting Liu and Guimin Hu and Zhongwei Wan and Rongchao Zhang and Aoxue Li and Mingyang Yi and Xiaohong Liu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=yS1dUkQFnu} }
Parameter-efficient transfer learning (PETL) methods show promise in adapting a pre-trained model to various downstream tasks while training only a few parameters. In the computer vision (CV) domain, numerous PETL algorithms have been proposed, but their direct employment or comparison remains inconvenient. To address this challenge, we construct a Unified Visual PETL Benchmark (V-PETL Bench) for the CV domain by selecting 30 diverse, challenging, and comprehensive datasets from image recognition, video action recognition, and dense prediction tasks. On these datasets, we systematically evaluate 25 dominant PETL algorithms and open-source a modular and extensible codebase for fair evaluation of these algorithms. V-PETL Bench runs on NVIDIA A800 GPUs and requires approximately 310 GPU days. We release the complete benchmark, making evaluation more efficient and friendly for researchers. Additionally, V-PETL Bench will be continuously updated with new PETL algorithms and CV tasks.
V-PETL Bench: A Unified Visual Parameter-Efficient Transfer Learning Benchmark
[ "Yi Xin", "Siqi Luo", "Xuyang Liu", "Yuntao Du.", "Haodi Zhou", "Xinyu Cheng", "Christina Luoluo Lee", "Junlong Du", "Haozhe Wang", "MingCai Chen", "Ting Liu", "Guimin Hu", "Zhongwei Wan", "Rongchao Zhang", "Aoxue Li", "Mingyang Yi", "Xiaohong Liu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=y10DM6R2r3
@inproceedings{ wang2024mmlupro, title={{MMLU}-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark}, author={Yubo Wang and Xueguang Ma and Ge Zhang and Yuansheng Ni and Abhranil Chandra and Shiguang Guo and Weiming Ren and Aaran Arulraj and Xuan He and Ziyan Jiang and Tianle Li and Max Ku and Kai Wang and Alex Zhuang and Rongqi Fan and Xiang Yue and Wenhu Chen}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=y10DM6R2r3} }
In the age of large-scale language models, benchmarks like the Massive Multitask Language Understanding (MMLU) have been pivotal in pushing the boundaries of what AI can achieve in language comprehension and reasoning across diverse domains. However, as models continue to improve, their performance on these benchmarks has begun to plateau, making it increasingly difficult to discern differences in model capabilities. This paper introduces MMLU-Pro, an enhanced dataset designed to extend the mostly knowledge-driven MMLU benchmark by integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options. Additionally, MMLU-Pro eliminates part of the trivial and noisy questions in MMLU. Our experimental results show that MMLU-Pro not only raises the challenge, causing a significant drop in accuracy of 16\% to 33\% compared to MMLU, but also demonstrates greater stability under varying prompts. With 24 different prompt styles tested, the sensitivity of model scores to prompt variations decreased from 4-5\% in MMLU to just 2\% in MMLU-Pro. Additionally, we found that models utilizing Chain of Thought (CoT) reasoning achieved better performance on MMLU-Pro compared to direct answering, which is in stark contrast to the findings on the original MMLU, indicating that MMLU-Pro includes more complex reasoning questions. Our assessments confirm that MMLU-Pro is a more discriminative benchmark for better tracking progress in the field.
MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark
[ "Yubo Wang", "Xueguang Ma", "Ge Zhang", "Yuansheng Ni", "Abhranil Chandra", "Shiguang Guo", "Weiming Ren", "Aaran Arulraj", "Xuan He", "Ziyan Jiang", "Tianle Li", "Max Ku", "Kai Wang", "Alex Zhuang", "Rongqi Fan", "Xiang Yue", "Wenhu Chen" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.01574
[ "" ]
https://huggingface.co/papers/2406.01574
14
43
3
17
[ "akjindal53244/Llama-3.1-Storm-8B", "akjindal53244/Llama-3.1-Storm-8B-GGUF", "ghost-x/ghost-8b-beta-1608", "ghost-x/ghost-8b-beta", "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic", "ghost-x/ghost-8b-beta-1608-gguf", "RichardErkhov/akjindal53244_-_Llama-3.1-Storm-8B-gguf", "QuantFactory/Llama-3.1-Storm-8B-GGUF", "unsloth/Llama-3.1-Storm-8B", "unsloth/Llama-3.1-Storm-8B-bnb-4bit", "EpistemeAI2/FireStorm-Llama-3.1-8B", "QuantFactory/FireStorm-Llama-3.1-8B-GGUF", "QuantFactory/ghost-8b-beta-GGUF", "QuantFactory/ghost-8b-beta-1608-GGUF", "FlorianJc/ghost-8b-beta-vllm-fp8", "RichardErkhov/ghost-x_-_ghost-8b-beta-gguf", "ghost-x/ghost-8b-beta-1608-awq", "RichardErkhov/ghost-x_-_ghost-8b-beta-1608-gguf", "ghost-x/ghost-8b-beta-1608-mlx", "RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf", "HoneyBadger2989/Llama-3.1-Storm-8B-GGUF", "osllmai-community/Llama-3.1-Storm-8B-bnb-4bit" ]
[ "TIGER-Lab/MMLU-Pro", "sam-paech/mmlu-pro-nomath-sml", "sam-paech/mmlu-pro-irt-1-0", "sam-paech/mmlu-pro-nomath", "sbintuitions/MMLU-Pro" ]
[ "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "open-llm-leaderboard/blog", "Weyaxi/leaderboard-results-to-modelcard", "LLM360/de-arena", "lamhieu/ghost-8b-beta-8k", "lamhieu/ghost-8b-beta-128k", "upstage/evalverse-space", "oceansweep/tldw", "sagar007/lama_storm_8b", "Granther/try-this-model", "ibm/benchbench", "NotASI/Llama-3.1-Storm-8B", "per/benchbench", "Darok/Featherless-Feud", "lamhieu/etherll-ghost-8b-beta-coder", "emekaboris/try-this-model", "modelcitizen/akjindal53244-Llama-3.1-Storm-8B", "ponomd420/akjindal53244-Llama-3.1-Storm-8B", "crystal99/akjindal53244-Llama-3.1-Storm-8B", "SC999/NV_Nemotron", "mahamed97/ghost-x-ghost-8b-beta-1608", "gjellerup/ghost-8b-beta-8k", "jcachat/Open_NotebookLM_TLDW", "aialliance/safetybat", "tsteffek/de-arena" ]
[ "akjindal53244/Llama-3.1-Storm-8B", "akjindal53244/Llama-3.1-Storm-8B-GGUF", "ghost-x/ghost-8b-beta-1608", "ghost-x/ghost-8b-beta", "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic", "ghost-x/ghost-8b-beta-1608-gguf", "RichardErkhov/akjindal53244_-_Llama-3.1-Storm-8B-gguf", "QuantFactory/Llama-3.1-Storm-8B-GGUF", "unsloth/Llama-3.1-Storm-8B", "unsloth/Llama-3.1-Storm-8B-bnb-4bit", "EpistemeAI2/FireStorm-Llama-3.1-8B", "QuantFactory/FireStorm-Llama-3.1-8B-GGUF", "QuantFactory/ghost-8b-beta-GGUF", "QuantFactory/ghost-8b-beta-1608-GGUF", "FlorianJc/ghost-8b-beta-vllm-fp8", "RichardErkhov/ghost-x_-_ghost-8b-beta-gguf", "ghost-x/ghost-8b-beta-1608-awq", "RichardErkhov/ghost-x_-_ghost-8b-beta-1608-gguf", "ghost-x/ghost-8b-beta-1608-mlx", "RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf", "HoneyBadger2989/Llama-3.1-Storm-8B-GGUF", "osllmai-community/Llama-3.1-Storm-8B-bnb-4bit" ]
[ "TIGER-Lab/MMLU-Pro", "sam-paech/mmlu-pro-nomath-sml", "sam-paech/mmlu-pro-irt-1-0", "sam-paech/mmlu-pro-nomath", "sbintuitions/MMLU-Pro" ]
[ "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "open-llm-leaderboard/blog", "Weyaxi/leaderboard-results-to-modelcard", "LLM360/de-arena", "lamhieu/ghost-8b-beta-8k", "lamhieu/ghost-8b-beta-128k", "upstage/evalverse-space", "oceansweep/tldw", "sagar007/lama_storm_8b", "Granther/try-this-model", "ibm/benchbench", "NotASI/Llama-3.1-Storm-8B", "per/benchbench", "Darok/Featherless-Feud", "lamhieu/etherll-ghost-8b-beta-coder", "emekaboris/try-this-model", "modelcitizen/akjindal53244-Llama-3.1-Storm-8B", "ponomd420/akjindal53244-Llama-3.1-Storm-8B", "crystal99/akjindal53244-Llama-3.1-Storm-8B", "SC999/NV_Nemotron", "mahamed97/ghost-x-ghost-8b-beta-1608", "gjellerup/ghost-8b-beta-8k", "jcachat/Open_NotebookLM_TLDW", "aialliance/safetybat", "tsteffek/de-arena" ]
1
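The `Datasets` field of this record links the Hugging Face dataset TIGER-Lab/MMLU-Pro. A minimal sketch of fetching and inspecting it with the `datasets` library is shown below; split and column names are not taken from this document, so the code only prints whatever the library reports.

```python
# Minimal sketch: download the MMLU-Pro dataset referenced in the record above
# and inspect its structure without assuming particular splits or columns.
from datasets import load_dataset

mmlu_pro = load_dataset("TIGER-Lab/MMLU-Pro")

print(mmlu_pro)  # DatasetDict: available splits and their sizes
first_split = next(iter(mmlu_pro.values()))
print(first_split.column_names)  # question/answer fields as reported by the hub
print(first_split[0])            # one raw example
```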
null
https://openreview.net/forum?id=y09S5rdaWY
@inproceedings{ jia2024benchdrive, title={Bench2Drive: Towards Multi-Ability Benchmarking of Closed-Loop End-To-End Autonomous Driving}, author={Xiaosong Jia and Zhenjie Yang and Qifeng Li and Zhiyuan Zhang and Junchi Yan}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=y09S5rdaWY} }
In an era marked by the rapid scaling of foundation models, autonomous driving technologies are approaching a transformative threshold where end-to-end autonomous driving (E2E-AD) emerges due to its potential to scale up in a data-driven manner. However, existing E2E-AD methods are mostly evaluated under the open-loop log-replay manner with L2 errors and collision rate as metrics (e.g., in nuScenes), which cannot fully reflect the driving performance of algorithms, as recently acknowledged in the community. For those E2E-AD methods evaluated under the closed-loop protocol, they are tested on fixed routes (e.g., Town05Long and Longest6 in CARLA) with the driving score as the metric, which is known for high variance due to the unsmoothed metric function and large randomness over long routes. Besides, these methods usually collect their own data for training, which makes algorithm-level fair comparison infeasible. To fulfill the paramount need for comprehensive, realistic, and fair testing environments for Full Self-Driving (FSD), we present Bench2Drive, the first benchmark for evaluating E2E-AD systems' multiple abilities in a closed-loop manner. Bench2Drive's official training data consists of 2 million fully annotated frames, collected from 10000 short clips uniformly distributed over 44 interactive scenarios (cut-in, overtaking, detour, etc.), 23 weathers (sunny, foggy, rainy, etc.), and 12 towns (urban, village, university, etc.) in CARLA v2. Its evaluation protocol requires E2E-AD models to pass the 44 interactive scenarios under different locations and weathers, summing to 220 routes, and thus provides a comprehensive and disentangled assessment of their driving capability under different situations. We implement state-of-the-art E2E-AD models and evaluate them in Bench2Drive, providing insights regarding the current status and future directions.
Bench2Drive: Towards Multi-Ability Benchmarking of Closed-Loop End-To-End Autonomous Driving
[ "Xiaosong Jia", "Zhenjie Yang", "Qifeng Li", "Zhiyuan Zhang", "Junchi Yan" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.03877
[ "https://github.com/Thinklab-SJTU/Bench2Drive" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=xqpkzMfmQ5
@inproceedings{ evans2024data, title={Data curation via joint example selection further accelerates multimodal learning}, author={Talfan Evans and Nikhil Parthasarathy and Hamza Merzic and Olivier J Henaff}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=xqpkzMfmQ5} }
Data curation is an essential component of large-scale pretraining. In this work, we demonstrate that jointly prioritizing batches of data is more effective for learning than selecting examples independently. Multimodal contrastive objectives expose the dependencies between data and thus naturally yield criteria for measuring the joint learnability of a batch. We derive a simple and tractable algorithm for selecting such batches, which significantly accelerates training beyond individually-prioritized data points. As performance improves by selecting from large super-batches, we also leverage recent advances in model approximation to reduce the computational overhead of scoring. As a result, our approach—multimodal contrastive learning with joint example selection (JEST)—surpasses state-of-the-art pretraining methods with up to 13× fewer iterations and 10× less computation. Essential to the performance of JEST is the ability to steer the data selection process towards the distribution of smaller, well-curated datasets via pretrained reference models, exposing data curation as a new dimension for neural scaling laws.
Data curation via joint example selection further accelerates multimodal learning
[ "Talfan Evans", "Nikhil Parthasarathy", "Hamza Merzic", "Olivier J Henaff" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.17711
[ "" ]
https://huggingface.co/papers/2406.17711
0
3
2
4
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=xotfLEAF4u
@inproceedings{ kil2024mllmcompbench, title={{MLLM}-CompBench: A Comparative Reasoning Benchmark for Multimodal {LLM}s}, author={Jihyung Kil and Zheda Mai and Justin Lee and Arpita Chowdhury and Zihe Wang and Kerrie Cheng and Lemeng Wang and Ye Liu and Wei-Lun Chao}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=xotfLEAF4u} }
The ability to compare objects, scenes, or situations is crucial for effective decision-making and problem-solving in everyday life. For instance, comparing the freshness of apples enables better choices during grocery shopping, while comparing sofa designs helps optimize the aesthetics of our living space. Despite its significance, the comparative capability is largely unexplored in artificial general intelligence (AGI). In this paper, we introduce MLLM-CompBench, a benchmark designed to evaluate the comparative reasoning capability of multimodal large language models (MLLMs). MLLM-CompBench mines and pairs images through visually oriented questions covering eight dimensions of relative comparison: visual attribute, existence, state, emotion, temporality, spatiality, quantity, and quality. We curate a collection of around 40K image pairs using metadata from diverse vision datasets and CLIP similarity scores. These image pairs span a broad array of visual domains, including animals, fashion, sports, and both outdoor and indoor scenes. The questions are carefully crafted to discern relative characteristics between two images and are labeled by human annotators for accuracy and relevance. We use MLLM-CompBench to evaluate recent MLLMs, including GPT-4V(ision), Gemini-Pro, and LLaVA-1.6. Our results reveal notable shortcomings in their comparative abilities. We believe MLLM-CompBench not only sheds light on these limitations but also establishes a solid foundation for future enhancements in the comparative capability of MLLMs.
MLLM-CompBench: A Comparative Reasoning Benchmark for Multimodal LLMs
[ "Jihyung Kil", "Zheda Mai", "Justin Lee", "Arpita Chowdhury", "Zihe Wang", "Kerrie Cheng", "Lemeng Wang", "Ye Liu", "Wei-Lun Chao" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=xlKeMuyoZ5
@inproceedings{ hauser2024large, title={Large Language Models' Expert-level Global History Knowledge Benchmark (Hi{ST}-{LLM})}, author={Jakob Hauser and D{\'a}niel Kondor and Jenny Reddish and Majid Benam and Enrico Cioni and Federica Villa and James S Bennett and Daniel Hoyer and Pieter Francois and Peter Turchin and R. Maria del Rio-Chanona}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=xlKeMuyoZ5} }
Large Language Models (LLMs) have the potential to transform humanities and social science research, yet their history knowledge and comprehension at a graduate level remains untested. Benchmarking LLMs in history is particularly challenging, given that human knowledge of history is inherently unbalanced, with more information available on Western history and recent periods. We introduce the History Seshat Test for LLMs (Hist-LLM), based on a subset of the Seshat Global History Databank, which provides a structured representation of human historical knowledge, containing 36,000 data points across 600 historical societies and over 2,700 scholarly references. This dataset covers every major world region from the Neolithic period to the Industrial Revolution and includes information reviewed and assembled by history experts and graduate research assistants. Using this dataset, we benchmark a total of seven models from the Gemini, OpenAI, and Llama families. We find that, in a four-choice format, LLMs have a balanced accuracy ranging from 33.6% (Llama-3.1-8B) to 46% (GPT-4-Turbo), outperforming random guessing (25%) but falling short of expert comprehension. LLMs perform better on earlier historical periods. Regionally, performance is more even but still better for the Americas and lowest in Oceania and Sub-Saharan Africa for the more advanced models. Our benchmark shows that while LLMs possess some expert-level historical knowledge, there is considerable room for improvement.
Large Language Models' Expert-level Global History Knowledge Benchmark (HiST-LLM)
[ "Jakob Hauser", "Dániel Kondor", "Jenny Reddish", "Majid Benam", "Enrico Cioni", "Federica Villa", "James S Bennett", "Daniel Hoyer", "Pieter Francois", "Peter Turchin", "R. Maria del Rio-Chanona" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=xkljKdGe4E
@inproceedings{ luo2024classic, title={Classic {GNN}s are Strong Baselines: Reassessing {GNN}s for Node Classification}, author={Yuankai Luo and Lei Shi and Xiao-Ming Wu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=xkljKdGe4E} }
Graph Transformers (GTs) have recently emerged as popular alternatives to traditional message-passing Graph Neural Networks (GNNs), due to their theoretically superior expressiveness and impressive performance reported on standard node classification benchmarks, often significantly outperforming GNNs. In this paper, we conduct a thorough empirical analysis to reevaluate the performance of three classic GNN models (GCN, GAT, and GraphSAGE) against GTs. Our findings suggest that the previously reported superiority of GTs may have been overstated due to suboptimal hyperparameter configurations in GNNs. Remarkably, with slight hyperparameter tuning, these classic GNN models achieve state-of-the-art performance, matching or even exceeding that of recent GTs across 17 out of the 18 diverse datasets examined. Additionally, we conduct detailed ablation studies to investigate the influence of various GNN configurations—such as normalization, dropout, residual connections, and network depth—on node classification performance. Our study aims to promote a higher standard of empirical rigor in the field of graph machine learning, encouraging more accurate comparisons and evaluations of model capabilities. Our implementation is available at https://github.com/LUOyk1999/tunedGNN.
Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification
[ "Yuankai Luo", "Lei Shi", "Xiao-Ming Wu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.08993
[ "https://github.com/LUOyk1999/tunedGNN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=xjxqWYyTfR
@inproceedings{ alberts2024unraveling, title={Unraveling Molecular Structure: A Multimodal Spectroscopic Dataset for Chemistry}, author={Marvin Alberts and Oliver Schilter and Federico Zipoli and Nina Hartrampf and Teodoro Laino}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=xjxqWYyTfR} }
Spectroscopic techniques are essential tools for determining the structure of molecules. Different spectroscopic techniques, such as Nuclear magnetic resonance (NMR), Infrared spectroscopy, and Mass Spectrometry, provide insight into the molecular structure, including the presence or absence of functional groups. Chemists leverage the complementary nature of the different methods to their advantage. However, the lack of a comprehensive multimodal dataset, containing spectra from a variety of spectroscopic techniques, has limited machine-learning approaches mostly to single-modality tasks for predicting molecular structures from spectra. Here we introduce a dataset comprising simulated $^1$H-NMR, $^{13}$C-NMR, HSQC-NMR, Infrared, and Mass spectra (positive and negative ion modes) for 790k molecules extracted from chemical reactions in patent data. This dataset enables the development of foundation models for integrating information from multiple spectroscopic modalities, emulating the approach employed by human experts. Additionally, we provide benchmarks for evaluating single-modality tasks such as structure elucidation, predicting the spectra for a target molecule, and functional group predictions. This dataset has the potential to automate structure elucidation, streamlining the molecular discovery pipeline from synthesis to structure determination. The dataset and code for the benchmarks can be found at https://rxn4chemistry.github.io/multimodal-spectroscopic-dataset (Available upon submission of the supporting information).
Unraveling Molecular Structure: A Multimodal Spectroscopic Dataset for Chemistry
[ "Marvin Alberts", "Oliver Schilter", "Federico Zipoli", "Nina Hartrampf", "Teodoro Laino" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.17492
[ "https://github.com/rxn4chemistry/multimodal-spectroscopic-dataset" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=xepxnDQoGq
@inproceedings{ hua2024reactzyme, title={ReactZyme: A Benchmark for Enzyme-Reaction Prediction}, author={Chenqing Hua and Bozitao Zhong and Sitao Luan and Liang Hong and Guy Wolf and Doina Precup and Shuangjia Zheng}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=xepxnDQoGq} }
Enzymes, with their specific catalyzed reactions, are necessary for all aspects of life, enabling diverse biological processes and adaptations. Predicting enzyme functions is essential for understanding biological pathways, guiding drug development, enhancing bioproduct yields, and facilitating evolutionary studies. Addressing the inherent complexities, we introduce a new approach to annotating enzymes based on their catalyzed reactions. This method provides detailed insights into specific reactions and is adaptable to newly discovered reactions, diverging from traditional classifications by protein family or expert-derived reaction classes. We employ machine learning algorithms to analyze enzyme reaction datasets, delivering a much more refined view on the functionality of enzymes. Our evaluation leverages the largest enzyme-reaction dataset to date, derived from the SwissProt and Rhea databases with entries up to January 8, 2024. We frame the enzyme-reaction prediction as a retrieval problem, aiming to rank enzymes by their catalytic ability for specific reactions. With our model, we can recruit proteins for novel reactions and predict reactions in novel proteins, facilitating enzyme discovery and function annotation https://github.com/WillHua127/ReactZyme.
ReactZyme: A Benchmark for Enzyme-Reaction Prediction
[ "Chenqing Hua", "Bozitao Zhong", "Sitao Luan", "Liang Hong", "Guy Wolf", "Doina Precup", "Shuangjia Zheng" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2408.13659
[ "https://github.com/willhua127/reactzyme" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=xT5pmUju8W
@inproceedings{ lee2024a, title={A Benchmark Suite for Evaluating Neural Mutual Information Estimators on Unstructured Datasets}, author={Kyungeun Lee and Wonjong Rhee}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=xT5pmUju8W} }
Mutual Information (MI) is a fundamental metric for quantifying dependency between two random variables. When we can access only the samples, but not the underlying distribution functions, we can evaluate MI using sample-based estimators. Assessment of such MI estimators, however, has almost always relied on analytical datasets including Gaussian multivariates. Such datasets allow analytical calculations of the true MI values, but they are limited in that they do not reflect the complexities of real-world datasets. This study introduces a comprehensive benchmark suite for evaluating neural MI estimators on unstructured datasets, specifically focusing on images and texts. By leveraging same-class sampling for positive pairing and introducing a binary symmetric channel trick, we show that we can accurately manipulate true MI values of real-world datasets. Using the benchmark suite, we investigate seven challenging scenarios, shedding light on the reliability of neural MI estimators for unstructured datasets.
A Benchmark Suite for Evaluating Neural Mutual Information Estimators on Unstructured Datasets
[ "Kyungeun Lee", "Wonjong Rhee" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.10924
[ "https://github.com/kyungeun-lee/mibenchmark" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=x8RgF2xQTj
@inproceedings{ mucs{\'a}nyi2024benchmarking, title={Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks}, author={B{\'a}lint Mucs{\'a}nyi and Michael Kirchhof and Seong Joon Oh}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=x8RgF2xQTj} }
Uncertainty quantification, once a singular task, has evolved into a spectrum of tasks, including abstained prediction, out-of-distribution detection, and aleatoric uncertainty quantification. The latest goal is disentanglement: the construction of multiple estimators that are each tailored to one and only one source of uncertainty. This paper presents the first benchmark of uncertainty disentanglement. We reimplement and evaluate a comprehensive range of uncertainty estimators, from Bayesian through evidential to deterministic ones, across a diverse range of uncertainty tasks on ImageNet. We find that, despite recent theoretical endeavors, no existing approach provides pairs of disentangled uncertainty estimators in practice. We further find that specialized uncertainty tasks are harder than predictive uncertainty tasks, where we observe saturating performance. Our results provide both practical advice for which uncertainty estimators to use for which specific task, and reveal opportunities for future research toward task-centric and disentangled uncertainties. All our reimplementations and weights and biases logs are available at https://github.com/bmucsanyi/untangle.
Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks
[ "Bálint Mucsányi", "Michael Kirchhof", "Seong Joon Oh" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2402.19460
[ "https://github.com/bmucsanyi/bud" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=wqo6xEMyk9
@inproceedings{ zi2024prog, title={ProG: A Graph Prompt Learning Benchmark}, author={Chenyi Zi and Haihong Zhao and Xiangguo Sun and Yiqing Lin and Hong Cheng and Jia Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=wqo6xEMyk9} }
Artificial general intelligence on graphs has shown significant advancements across various applications, yet the traditional `Pre-train \& Fine-tune' paradigm faces inefficiencies and negative transfer issues, particularly in complex and few-shot settings. Graph prompt learning emerges as a promising alternative, leveraging lightweight prompts to manipulate data and fill the task gap by reformulating downstream tasks to the pretext. However, several critical challenges still remain: how to unify diverse graph prompt models, how to evaluate the quality of graph prompts, and how to improve their usability for practical comparisons and selection. In response to these challenges, we introduce the first comprehensive benchmark for graph prompt learning. Our benchmark integrates **SIX** pre-training methods and **FIVE** state-of-the-art graph prompt techniques, evaluated across **FIFTEEN** diverse datasets to assess performance, flexibility, and efficiency. We also present 'ProG', an easy-to-use open-source library that streamlines the execution of various graph prompt models, facilitating objective evaluations. Additionally, we propose a unified framework that categorizes existing graph prompt methods into two main approaches: prompts as graphs and prompts as tokens. This framework enhances the applicability and comparison of graph prompt techniques. The code is available at: https://github.com/sheldonresearch/ProG.
ProG: A Graph Prompt Learning Benchmark
[ "Chenyi Zi", "Haihong Zhao", "Xiangguo Sun", "Yiqing Lin", "Hong Cheng", "Jia Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.05346
[ "https://github.com/sheldonresearch/ProG" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=wmO7z57wNK
@inproceedings{ yang2024llmcbench, title={{LLMCB}ench: Benchmarking Large Language Model Compression for Efficient Deployment}, author={Ge Yang and Changyi He and Jinyang Guo and Jianyu Wu and Yifu Ding and Aishan Liu and Haotong Qin and Pengliang Ji and Xianglong Liu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=wmO7z57wNK} }
Although large language models (LLMs) have demonstrated strong intelligence capabilities, their high demand for computation and storage hinders practical application. To this end, many model compression techniques have been proposed to increase the efficiency of LLMs. However, current research only validates these methods on limited models, datasets, metrics, etc., and still lacks a comprehensive evaluation under more general scenarios. It therefore remains unclear which model compression approach should be used in a specific case. To mitigate this gap, we present the Large Language Model Compression Benchmark (LLMCBench), a rigorously designed benchmark with an in-depth analysis of LLM compression algorithms. We first analyze actual model production requirements and carefully design evaluation tracks and metrics. Then, we conduct extensive experiments and comparisons using multiple mainstream LLM compression approaches. Finally, we perform an in-depth analysis based on the evaluation and provide useful insights for LLM compression design. We hope LLMCBench can contribute insightful suggestions for LLM compression algorithm design and serve as a foundation for future research.
LLMCBench: Benchmarking Large Language Model Compression for Efficient Deployment
[ "Ge Yang", "Changyi He", "Jinyang Guo", "Jianyu Wu", "Yifu Ding", "Aishan Liu", "Haotong Qin", "Pengliang Ji", "Xianglong Liu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2410.21352
[ "https://github.com/aboveparadise/llmcbench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=wjHVmgBDzc
@inproceedings{ su2024textttconflictbank, title={ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in {LLM}s}, author={Zhaochen Su and Jun Zhang and Xiaoye Qu and Tong Zhu and Yanshu Li and Jiashuo Sun and Juntao Li and Min Zhang and Yu Cheng}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=wjHVmgBDzc} }
Large language models (LLMs) have achieved impressive advancements across numerous disciplines, yet the critical issue of knowledge conflicts, a major source of hallucinations, has rarely been studied. While a few studies have explored the conflicts between the inherent knowledge of LLMs and the retrieved contextual knowledge, a comprehensive assessment of knowledge conflicts in LLMs is still missing. Motivated by this research gap, we first propose ConflictBank, the largest benchmark with 7.45M claim-evidence pairs and 553k QA pairs, addressing conflicts from misinformation, temporal discrepancies, and semantic divergences. Using ConflictBank, we conduct thorough and controlled experiments for a comprehensive understanding of LLM behavior in knowledge conflicts, focusing on three key aspects: (i) conflicts encountered in retrieved knowledge, (ii) conflicts within the models' encoded knowledge, and (iii) the interplay between these conflict forms. Our investigation delves into four model families and twelve LLM instances and provides insights into conflict types, model sizes, and the impact at different stages. We believe that knowledge conflicts represent a critical bottleneck to achieving trustworthy artificial intelligence and hope our work will offer valuable guidance for future model training and development. Resources are available at https://github.com/zhaochen0110/conflictbank.
ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLMs
[ "Zhaochen Su", "Jun Zhang", "Xiaoye Qu", "Tong Zhu", "Yanshu Li", "Jiashuo Sun", "Juntao Li", "Min Zhang", "Yu Cheng" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=wSCfRAAr69
@inproceedings{ zhang2024language, title={Language Without Borders: A Dataset and Benchmark for Code-Switching Lip Reading}, author={Xueyi Zhang and Chengwei Zhang and Mingrui Lao and Peng Zhao and Jun Tang and Yanming Guo and Siqi Cai and Xianghu Yue and Haizhou Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=wSCfRAAr69} }
Lip reading aims at transforming videos of continuous lip movement into textual content, and has achieved significant progress over the past decade. It serves as critical and practical assistance for speech-impaired individuals, and is more practical than speech recognition in noisy environments. With the increase in interpersonal communication on social media owing to globalization, existing monolingual datasets for lip reading may not be sufficient to meet the exponential proliferation of bilingual and even multilingual users. However, to the best of our knowledge, code-switching has only been explored in speech recognition, while attempts in lip reading have been largely neglected. To bridge this gap, we have collected a bilingual code-switching lip reading benchmark composed of Chinese and English, dubbed CSLR. As a pioneering effort, we recruited 62 speakers with proficient foundations in both spoken Chinese and English to express sentences containing both languages. Through rigorous criteria in data selection, the CSLR benchmark has accumulated 85,560 video samples with a resolution of 1080x1920, totaling over 71.3 hours of high-quality code-switching lip movement data. To systematically evaluate the technical challenges in CSLR, we implement commonly-used lip reading backbones, as well as competitive solutions for code-switching speech, for benchmark testing. Experiments show CSLR to be a challenging and under-explored lip reading task. We hope our proposed benchmark will extend the applicability of code-switching lip reading, and further contribute to the communities of cross-lingual communication and collaboration. Our dataset and benchmark are accessible at https://github.com/cslr-lipreading/CSLR.
Language Without Borders: A Dataset and Benchmark for Code-Switching Lip Reading
[ "Xueyi Zhang", "Chengwei Zhang", "Mingrui Lao", "Peng Zhao", "Jun Tang", "Yanming Guo", "Siqi Cai", "Xianghu Yue", "Haizhou Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=wOmtZ5FgMH
@inproceedings{ jin2024rwku, title={{RWKU}: Benchmarking Real-World Knowledge Unlearning for Large Language Models}, author={Zhuoran Jin and Pengfei Cao and Chenhao Wang and Zhitao He and Hongbang Yuan and Jiachun Li and Yubo Chen and Kang Liu and Jun Zhao}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=wOmtZ5FgMH} }
Large language models (LLMs) inevitably memorize sensitive, copyrighted, and harmful knowledge from the training corpus; therefore, it is crucial to erase this knowledge from the models. Machine unlearning is a promising solution for efficiently removing specific knowledge by modifying models post hoc. In this paper, we propose a Real-World Knowledge Unlearning benchmark (RWKU) for LLM unlearning. RWKU is designed based on the following three key factors: (1) For the task setting, we consider a more practical and challenging unlearning setting, where neither the forget corpus nor the retain corpus is accessible. (2) For the knowledge source, we choose 200 real-world famous people as the unlearning targets and show that such popular knowledge is widely present in various LLMs. (3) For the evaluation framework, we design the forget set and the retain set to evaluate the model’s capabilities across various real-world applications. Regarding the forget set, we provide four membership inference attack (MIA) methods and nine kinds of adversarial attack probes to rigorously test unlearning efficacy. Regarding the retain set, we assess locality and utility in terms of neighbor perturbation, general ability, reasoning ability, truthfulness, factuality, and fluency. We conduct extensive experiments across two unlearning scenarios, two models, and six baseline methods and obtain some meaningful findings. We release our benchmark and code publicly at http://rwku-bench.github.io for future work.
RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models
[ "Zhuoran Jin", "Pengfei Cao", "Chenhao Wang", "Zhitao He", "Hongbang Yuan", "Jiachun Li", "Yubo Chen", "Kang Liu", "Jun Zhao" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.10890
[ "https://github.com/jinzhuoran/rwku" ]
https://huggingface.co/papers/2406.10890
1
1
0
9
[]
[ "jinzhuoran/RWKU" ]
[]
[]
[ "jinzhuoran/RWKU" ]
[]
1
null
https://openreview.net/forum?id=w90ZH5v34S
@inproceedings{ zhang2024humor, title={Humor in {AI}: Massive Scale Crowd-Sourced Preferences and Benchmarks for Cartoon Captioning}, author={Jifan Zhang and Lalit K Jain and Yang Guo and Jiayi Chen and Kuan Lok Zhou and Siddharth Suresh and Andrew Wagenmaker and Scott Sievert and Timothy T. Rogers and Kevin Jamieson and Bob Mankoff and Robert D Nowak}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=w90ZH5v34S} }
We present a novel multimodal preference dataset for creative tasks, consisting of over 250 million human votes on more than 2.2 million captions, collected through crowdsourcing rating data for The New Yorker's weekly cartoon caption contest over the past eight years. This unique dataset supports the development and evaluation of multimodal large language models and preference-based fine-tuning algorithms for humorous caption generation. We propose novel benchmarks for judging the quality of model-generated captions, utilizing both GPT4 and human judgments to establish ranking-based evaluation strategies. Our experimental results highlight the limitations of current fine-tuning methods, such as RLHF and DPO, when applied to creative tasks. Furthermore, we demonstrate that even state-of-the-art models like GPT4 and Claude currently underperform top human contestants in generating humorous captions. As we conclude this extensive data collection effort, we release the entire preference dataset to the research community, fostering further advancements in AI humor generation and evaluation.
Humor in AI: Massive Scale Crowd-Sourced Preferences and Benchmarks for Cartoon Captioning
[ "Jifan Zhang", "Lalit K Jain", "Yang Guo", "Jiayi Chen", "Kuan Lok Zhou", "Siddharth Suresh", "Andrew Wagenmaker", "Scott Sievert", "Timothy T. Rogers", "Kevin Jamieson", "Bob Mankoff", "Robert D Nowak" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.10522
[ "https://github.com/yguooo/cartoon-caption-generation" ]
https://huggingface.co/papers/2406.10522
1
7
2
12
[]
[ "yguooo/newyorker_caption_ranking" ]
[]
[]
[ "yguooo/newyorker_caption_ranking" ]
[]
1
null
https://openreview.net/forum?id=w5jfyvsRq3
@inproceedings{ duncan2024fit, title={Fit for our purpose, not yours: Benchmark for a low-resource, Indigenous language}, author={Suzanne Duncan and Gianna Leoni and Lee Steven and Keoni Mahelona and Peter-Lucas Jones}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=w5jfyvsRq3} }
Influential and popular benchmarks in AI are largely irrelevant to developing NLP tools for low-resource, Indigenous languages. With the primary goal of measuring the performance of general-purpose AI systems, these benchmarks fail to give due consideration and care to individual language communities, especially low-resource languages. The datasets contain numerous grammatical and orthographic errors, poor pronunciation, limited vocabulary, and content that lacks cultural relevance to the language community. To overcome the issues with these benchmarks, we have created a dataset for te reo Māori (the Indigenous language of Aotearoa/New Zealand) to pursue NLP tools that are ‘fit-for-our-purpose’. This paper demonstrates how low-resource, Indigenous languages can develop tailored, high-quality benchmarks that: i. consider the impact of colonisation on their language; ii. reflect the diversity of speakers in the language community; and iii. support the aspirations for the tools they are developing and their language revitalisation efforts.
Fit for our purpose, not yours: Benchmark for a low-resource, Indigenous language
[ "Suzanne Duncan", "Gianna Leoni", "Lee Steven", "Keoni Mahelona", "Peter-Lucas Jones" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=vyraA7xt4c
@inproceedings{ du2024mercury, title={Mercury: A Code Efficiency Benchmark for Code Large Language Models}, author={Mingzhe Du and Anh Tuan Luu and Bin Ji and Qian Liu and See-Kiong Ng}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=vyraA7xt4c} }
Amidst the recent strides in evaluating Large Language Models for Code (Code LLMs), existing benchmarks have mainly focused on the functional correctness of generated code, neglecting the importance of computational efficiency. To fill the gap, we present Mercury, the first code efficiency benchmark for Code LLMs. It comprises 1,889 Python tasks, each accompanied by adequate solutions that serve as real-world efficiency baselines, enabling a comprehensive analysis of the runtime distribution. Based on this distribution, we introduce a new metric, Beyond, which computes a runtime-percentile-weighted Pass score to reflect functional correctness and code efficiency simultaneously. On Mercury, leading Code LLMs can achieve 65% on Pass but less than 50% on Beyond. Given that an ideal Beyond score would be aligned with the Pass score, this indicates that while Code LLMs exhibit impressive capabilities in generating functionally correct code, there remains a notable gap in their efficiency. Finally, our empirical experiments reveal that Direct Preference Optimization (DPO) serves as a robust baseline for enhancing code efficiency compared with Supervised Fine-Tuning (SFT), which paves a promising avenue for future exploration of efficient code generation. Our code and data are available on GitHub: https://github.com/Elfsong/Mercury.
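The runtime-percentile weighting described above can be sketched in a few lines. This is our reading of the abstract, not Mercury's exact formula; the function and variable names are illustrative.

```python
# Rough sketch of a Beyond-style score: a solution earns credit only if it
# passes, weighted by how many reference runtimes it beats.
from bisect import bisect_right


def beyond_score(passed: bool, runtime: float, reference_runtimes: list[float]) -> float:
    if not passed:
        return 0.0
    refs = sorted(reference_runtimes)
    # Number of reference runtimes strictly greater (i.e., slower) than ours.
    strictly_slower = len(refs) - bisect_right(refs, runtime)
    return strictly_slower / len(refs)


# Example: a passing solution whose runtime beats 3 of 4 reference solutions.
print(beyond_score(True, runtime=0.8, reference_runtimes=[0.5, 1.0, 1.2, 2.0]))  # 0.75
```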
Mercury: A Code Efficiency Benchmark for Code Large Language Models
[ "Mingzhe Du", "Anh Tuan Luu", "Bin Ji", "Qian Liu", "See-Kiong Ng" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2402.07844
[ "https://github.com/elfsong/mercury" ]
https://huggingface.co/papers/2402.07844
0
1
0
4
[]
[ "Elfsong/Mercury" ]
[]
[]
[ "Elfsong/Mercury" ]
[]
1
null
https://openreview.net/forum?id=vvyUa3CDwt
@inproceedings{ gr{\"o}ger2024intrinsic, title={Intrinsic Self-Supervision for Data Quality Audits}, author={Fabian Gr{\"o}ger and Simone Lionetti and Philippe Gottfrois and Alvaro Gonzalez-Jimenez and Ludovic Amruthalingam and Matthew Groh and Alexander A. Navarini and Marc Pouly}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=vvyUa3CDwt} }
Benchmark datasets in computer vision often contain off-topic images, near duplicates, and label errors, leading to inaccurate estimates of model performance. In this paper, we revisit the task of data cleaning and formalize it as either a ranking problem, which significantly reduces human inspection effort, or a scoring problem, which allows for automated decisions based on score distributions. We find that a specific combination of context-aware self-supervised representation learning and distance-based indicators is effective in finding issues without annotation biases. This methodology, which we call SelfClean, surpasses state-of-the-art performance in detecting off-topic images, near duplicates, and label errors within widely-used image datasets, such as ImageNet-1k, Food-101N, and STL-10, both for synthetic issues and real contamination. We apply the detailed method to multiple image benchmarks, identify up to 16% of issues, and confirm an improvement in evaluation reliability upon cleaning. The official implementation can be found at: https://github.com/Digital-Dermatology/SelfClean.
Intrinsic Self-Supervision for Data Quality Audits
[ "Fabian Gröger", "Simone Lionetti", "Philippe Gottfrois", "Alvaro Gonzalez-Jimenez", "Ludovic Amruthalingam", "Matthew Groh", "Alexander A. Navarini", "Marc Pouly" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2305.17048
[ "https://github.com/Digital-Dermatology/SelfClean-Evaluation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=vlUK2h1Nvw
@inproceedings{ farebrother2024cale, title={{CALE}: Continuous Arcade Learning Environment}, author={Jesse Farebrother and Pablo Samuel Castro}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=vlUK2h1Nvw} }
We introduce the Continuous Arcade Learning Environment (CALE), an extension of the well-known Arcade Learning Environment (ALE) [Bellemare et al., 2013]. The CALE uses the same underlying emulator of the Atari 2600 gaming system (Stella), but adds support for continuous actions. This enables the benchmarking and evaluation of continuous-control agents (such as PPO [Schulman et al., 2017] and SAC [Haarnoja et al., 2018]) and value-based agents (such as DQN [Mnih et al., 2015] and Rainbow [Hessel et al., 2018]) on the same environment suite. We provide a series of open questions and research directions that CALE enables, as well as initial baseline results using Soft Actor-Critic. CALE is available as part of the ALE at https://github.com/Farama-Foundation/Arcade-Learning-Environment.
CALE: Continuous Arcade Learning Environment
[ "Jesse Farebrother", "Pablo Samuel Castro" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.23810
[ "https://github.com/farama-foundation/arcade-learning-environment" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=vfju5hjrJw
@inproceedings{ zhong2024comback, title={ComBack: A Versatile Dataset for Enhancing Compiler Backend Development Efficiency}, author={Ming Zhong and FANG LYU and Lulin Wang and Hongna Geng and Lei Qiu and Huimin Cui and Xiaobing Feng}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=vfju5hjrJw} }
Compiler backends are tasked with generating executable machine code for processors. With the proliferation of diverse processors, it is imperative for programmers to tailor specific compiler backends to accommodate each one. Meanwhile, compiler backend development is a laborious and time-consuming task, lacking effective automation methods. Although language models have demonstrated strong abilities in code-related tasks, the lack of appropriate datasets for compiler backend development limits the application of language models in this field. In this paper, we introduce ComBack, the first public dataset designed for improving the compiler backend development capabilities of language models. ComBack includes 178 backends for mainstream compilers and three tasks including statement-level completion, next-statement suggestion and code generation, representing common development scenarios. We conducted experiments by fine-tuning six pre-trained language models with ComBack, demonstrating its effectiveness in enhancing model accuracy across the three tasks. We further evaluated the top-performing model (CodeT5+) across the three tasks for new targets, comparing its accuracy with conventional methods (Fork-Flow), ChatGPT-3.5-Turbo, and Code-LLaMA-34B-Instruct. Remarkably, fine-tuned CodeT5+ with only 220M parameters on ComBack outperformed Fork-Flow methods significantly and surpassed ChatGPT and Code-LLaMA. This suggests potential efficiency improvements in compiler development. ComBack is available at https://huggingface.co/datasets/docz-ict/ComBack.
ComBack: A Versatile Dataset for Enhancing Compiler Backend Development Efficiency
[ "Ming Zhong", "FANG LYU", "Lulin Wang", "Hongna Geng", "Lei Qiu", "Huimin Cui", "Xiaobing Feng" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=vecFROHnL4
@inproceedings{ jiang2024marvel, title={{MARVEL}: Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning}, author={Yifan Jiang and Jiarui Zhang and Kexuan Sun and Zhivar Sourati and Kian Ahrabian and Kaixin Ma and Filip Ilievski and Jay Pujara}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=vecFROHnL4} }
While multi-modal large language models (MLLMs) have shown significant progress across popular visual reasoning benchmarks, whether they possess abstract visual reasoning abilities remains an open question. Similar to Sudoku puzzles, abstract visual reasoning (AVR) problems require finding high-level patterns (e.g., repetition constraints on numbers) that control the input shapes (e.g., digits) in a specific task configuration (e.g., matrix). However, existing AVR benchmarks only consider a limited set of patterns (addition, conjunction), input shapes (rectangle, square), and task configurations (3 × 3 matrices), and they fail to capture all abstract reasoning patterns in human cognition necessary for addressing real-world tasks, such as geometric properties and object boundary understanding in real-world navigation. To evaluate MLLMs’ AVR abilities systematically, we introduce MARVEL, a multi-dimensional AVR benchmark founded on the core knowledge system in human cognition, with 770 puzzles composed of six core knowledge patterns, geometric and abstract shapes, and five different task configurations. To inspect whether model performance is grounded in perception or reasoning, MARVEL complements the standard AVR question with perception questions in a hierarchical evaluation framework. We conduct comprehensive experiments on MARVEL with ten representative MLLMs in zero-shot and few-shot settings. Our experiments reveal that all MLLMs show near-random performance on MARVEL, with significant performance gaps (40%) compared to humans across all patterns and task configurations. Further analysis of perception questions reveals that MLLMs struggle to comprehend the visual features (near-random performance). Although closed-source MLLMs, such as GPT-4V, show a promising understanding of reasoning patterns (on par with humans) after adding textual descriptions, this advantage is hindered by their weak perception abilities. We release our entire code and dataset at https://github.com/1171-jpg/MARVEL_AVR.
MARVEL: Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning
[ "Yifan Jiang", "Jiarui Zhang", "Kexuan Sun", "Zhivar Sourati", "Kian Ahrabian", "Kaixin Ma", "Filip Ilievski", "Jay Pujara" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2404.13591
[ "https://github.com/1171-jpg/marvel_avr" ]
https://huggingface.co/papers/2404.13591
2
2
0
8
[]
[ "kianasun/MARVEL" ]
[]
[]
[ "kianasun/MARVEL" ]
[]
1
null
https://openreview.net/forum?id=vXnGXRbOfb
@inproceedings{ zhang2024towards, title={Towards Open Respiratory Acoustic Foundation Models: Pretraining and Benchmarking}, author={Yuwei Zhang and Tong Xia and Jing Han and Yu Wu and Georgios Rizos and Yang Liu and Mohammed Mosuily and Jagmohan Chauhan and Cecilia Mascolo}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=vXnGXRbOfb} }
Respiratory audio, such as coughing and breathing sounds, has predictive power for a wide range of healthcare applications, yet is currently under-explored. The main problem for those applications arises from the difficulty in collecting large labeled task-specific data for model development. Generalizable respiratory acoustic foundation models pretrained with unlabeled data would offer appealing advantages and possibly unlock this impasse. However, given the safety-critical nature of healthcare applications, it is pivotal to also ensure openness and replicability for any proposed foundation model solution. To this end, we introduce OPERA, an OPEn Respiratory Acoustic foundation model pretraining and benchmarking system, as the first approach answering this need. We curate large-scale respiratory audio datasets ($\sim$136K samples, over 400 hours), pretrain three pioneering foundation models, and build a benchmark consisting of 19 downstream respiratory health tasks for evaluation. Our pretrained models demonstrate superior performance (against existing acoustic models pretrained with general audio on 16 out of 19 tasks) and generalizability (to unseen datasets and new respiratory audio modalities). This highlights the great promise of respiratory acoustic foundation models and encourages more studies using OPERA as an open resource to accelerate research on respiratory audio for health. The system is accessible from https://github.com/evelyn0414/OPERA.
Towards Open Respiratory Acoustic Foundation Models: Pretraining and Benchmarking
[ "Yuwei Zhang", "Tong Xia", "Jing Han", "Yu Wu", "Georgios Rizos", "Yang Liu", "Mohammed Mosuily", "Jagmohan Chauhan", "Cecilia Mascolo" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.16148
[ "https://github.com/evelyn0414/opera" ]
https://huggingface.co/papers/2406.16148
0
0
0
9
[ "evelyn0414/OPERA" ]
[]
[]
[ "evelyn0414/OPERA" ]
[]
[]
1
null
https://openreview.net/forum?id=vJaWizbBdA
@inproceedings{ oh2024erbench, title={{ERB}ench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models}, author={Jio Oh and Soyeon Kim and Junseok Seo and Jindong Wang and Ruochen Xu and Xing Xie and Steven Euijong Whang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=vJaWizbBdA} }
Large language models (LLMs) have achieved unprecedented performances in various applications, yet evaluating them is still challenging. Existing benchmarks are either manually constructed or are automatic, but lack the ability to evaluate the thought process of LLMs with arbitrary complexity. We contend that utilizing existing relational databases based on the entity-relationship (ER) model is a promising approach for constructing benchmarks as they contain structured knowledge that can be used to question LLMs. Unlike knowledge graphs, which are also used to evaluate LLMs, relational databases have integrity constraints that can be used to better construct complex in-depth questions and verify answers: (1) functional dependencies can be used to pinpoint critical keywords that an LLM must know to properly answer a given question containing certain attribute values; and (2) foreign key constraints can be used to join relations and construct multi-hop questions, which can be arbitrarily long and used to debug intermediate answers. We thus propose ERBench, which uses these integrity constraints to convert any database into an LLM benchmark. ERBench supports continuous evaluation as databases change, multimodal questions, and various prompt engineering techniques. In our experiments, we construct LLM benchmarks using databases of multiple domains and make an extensive comparison of contemporary LLMs. We show how ERBench can properly evaluate any LLM by not only checking for answer correctness, but also effectively verifying the rationales by looking for the right keywords.
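To illustrate the construction described above, here is a toy sketch of turning a relational row into an automatically verifiable question by exploiting a functional dependency; the schema, template wording, and helper names are hypothetical and not ERBench's actual code.

```python
# Toy sketch: the functional dependency (title, year) -> director pins down a
# single correct answer, so the row itself verifies the LLM's response.
def make_question(row: dict) -> tuple[str, str]:
    question = (
        f"Who directed the movie '{row['title']}' released in {row['year']}? "
        "Answer with the director's name."
    )
    return question, row["director"]


movie = {"title": "Inception", "year": 2010, "director": "Christopher Nolan"}
question, expected_keyword = make_question(movie)
print(question)
print("Keyword the answer/rationale must contain:", expected_keyword)
# Foreign keys would let us join another relation (e.g., director -> birth year)
# to chain such questions into multi-hop variants.
```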
ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models
[ "Jio Oh", "Soyeon Kim", "Junseok Seo", "Jindong Wang", "Ruochen Xu", "Xing Xie", "Steven Euijong Whang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2403.05266
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=urjPCYZt0I
@inproceedings{ chao2024jailbreakbench, title={JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models}, author={Patrick Chao and Edoardo Debenedetti and Alexander Robey and Maksym Andriushchenko and Francesco Croce and Vikash Sehwag and Edgar Dobriban and Nicolas Flammarion and George J. Pappas and Florian Tram{\`e}r and Hamed Hassani and Eric Wong}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=urjPCYZt0I} }
Jailbreak attacks cause large language models (LLMs) to generate harmful, unethical, or otherwise objectionable content. Evaluating these attacks presents a number of challenges, which the current collection of benchmarks and evaluation techniques do not adequately address. First, there is no clear standard of practice regarding jailbreaking evaluation. Second, existing works compute costs and success rates in incomparable ways. And third, numerous works are not reproducible, as they withhold adversarial prompts, involve closed-source code, or rely on evolving proprietary APIs. To address these challenges, we introduce JailbreakBench, an open-sourced benchmark with the following components: (1) an evolving repository of state-of-the-art adversarial prompts, which we refer to as *jailbreak artifacts*; (2) a jailbreaking dataset comprising 100 behaviors---both original and sourced from prior work---which align with OpenAI's usage policies; (3) a standardized evaluation framework at https://github.com/JailbreakBench/jailbreakbench that includes a clearly defined threat model, system prompts, chat templates, and scoring functions; and (4) a leaderboard at https://jailbreakbench.github.io/ that tracks the performance of attacks and defenses for various LLMs. We have carefully considered the potential ethical implications of releasing this benchmark, and believe that it will be a net positive for the community.
JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models
[ "Patrick Chao", "Edoardo Debenedetti", "Alexander Robey", "Maksym Andriushchenko", "Francesco Croce", "Vikash Sehwag", "Edgar Dobriban", "Nicolas Flammarion", "George J. Pappas", "Florian Tramèr", "Hamed Hassani", "Eric Wong" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2404.01318
[ "https://github.com/jailbreakbench/artifacts" ]
https://huggingface.co/papers/2404.01318
2
0
0
12
[]
[ "JailbreakBench/JBB-Behaviors", "walledai/JailbreakBench", "dynamoai/safe_eval" ]
[]
[]
[ "JailbreakBench/JBB-Behaviors", "walledai/JailbreakBench", "dynamoai/safe_eval" ]
[]
1
null
https://openreview.net/forum?id=urJyyMKs7E
@inproceedings{ sukthanker2024hwgptbench, title={{HW}-{GPT}-Bench: Hardware-Aware Architecture Benchmark for Language Models}, author={Rhea Sanjay Sukthanker and Arber Zela and Benedikt Staffler and Aaron Klein and Lennart Purucker and J{\"o}rg K.H. Franke and Frank Hutter}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=urJyyMKs7E} }
The increasing size of language models necessitates a thorough analysis across multiple dimensions to assess trade-offs among crucial hardware metrics such as latency, energy consumption, GPU memory usage, and performance. Identifying optimal model configurations under specific hardware constraints is becoming essential but remains challenging due to the computational load of exhaustive training and evaluation on multiple devices. To address this, we introduce HW-GPT-Bench, a hardware-aware benchmark that utilizes surrogate predictions to approximate various hardware metrics across 13 devices for architectures in the GPT-2 family, with architectures containing up to 1.55B parameters. Our surrogates, via calibrated predictions and reliable uncertainty estimates, faithfully model the heteroscedastic noise inherent in the energy and latency measurements. To estimate perplexity, we employ weight-sharing techniques from Neural Architecture Search (NAS), inheriting pretrained weights from the largest GPT-2 model. Finally, we demonstrate the utility of HW-GPT-Bench by simulating optimization trajectories of various multi-objective optimization algorithms in just a few seconds.
HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models
[ "Rhea Sanjay Sukthanker", "Arber Zela", "Benedikt Staffler", "Aaron Klein", "Lennart Purucker", "Jörg K.H. Franke", "Frank Hutter" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2405.10299
[ "https://github.com/automl/hw-aware-llm-bench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=uKqn1Flsbp
@inproceedings{ barsellotti2024personalized, title={Personalized Instance-based Navigation Toward User-Specific Objects in Realistic Environments}, author={Luca Barsellotti and Roberto Bigazzi and Marcella Cornia and Lorenzo Baraldi and Rita Cucchiara}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=uKqn1Flsbp} }
In recent years, research interest in visual navigation towards objects in indoor environments has grown significantly. This growth can be attributed to the recent availability of large navigation datasets in photo-realistic simulated environments, like Gibson and Matterport3D. However, the navigation tasks supported by these datasets are often restricted to the objects present in the environment at acquisition time. Also, they fail to account for the realistic scenario in which the target object is a user-specific instance that can be easily confused with similar objects and may be found in multiple locations within the environment. To address these limitations, we propose a new task termed Personalized Instance-based Navigation (PIN), in which an embodied agent is tasked with locating and reaching a specific personal object by distinguishing it among multiple instances of the same category. The task is accompanied by PInNED, a dedicated new dataset composed of photo-realistic scenes augmented with additional 3D objects. In each episode, the target object is presented to the agent using two modalities: a set of visual reference images on a neutral background and manually annotated textual descriptions. Through comprehensive evaluations and analyses, we showcase the challenges of the PIN task as well as the performance and shortcomings of currently available methods designed for object-driven navigation, considering modular and end-to-end agents.
Personalized Instance-based Navigation Toward User-Specific Objects in Realistic Environments
[ "Luca Barsellotti", "Roberto Bigazzi", "Marcella Cornia", "Lorenzo Baraldi", "Rita Cucchiara" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.18195
[ "https://github.com/aimagelab/pin" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=u7m2CG84BQ
@inproceedings{ kuratov2024babilong, title={{BABIL}ong: Testing the Limits of {LLM}s with Long Context Reasoning-in-a-Haystack}, author={Yuri Kuratov and Aydar Bulatov and Petr Anokhin and Ivan Rodkin and Dmitry Igorevich Sorokin and Artyom Sorokin and Mikhail Burtsev}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=u7m2CG84BQ} }
In recent years, the input context sizes of large language models (LLMs) have increased dramatically. However, existing evaluation methods have not kept pace, failing to comprehensively assess the efficiency of models in handling long contexts. To bridge this gap, we introduce the BABILong benchmark, designed to test language models' ability to reason across facts distributed in extremely long documents. BABILong includes a diverse set of 20 reasoning tasks, including fact chaining, simple induction, deduction, counting, and handling lists/sets. These tasks are challenging on their own, and even more demanding when the required facts are scattered across long natural text. Our evaluations show that popular LLMs effectively utilize only 10-20% of the context and their performance declines sharply with increased reasoning complexity. Among alternatives to in-context reasoning, Retrieval-Augmented Generation methods achieve a modest 60% accuracy on single-fact question answering, independent of context length. Among context extension methods, the highest performance is demonstrated by recurrent memory transformers after fine-tuning, enabling the processing of lengths up to 50 million tokens. The BABILong benchmark is extendable to any length to support the evaluation of new upcoming models with increased capabilities, and we provide splits up to 10 million token lengths.
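A minimal sketch of the "reasoning-in-a-haystack" construction the abstract describes, with placeholder filler text, facts, and question (not BABILong's actual source corpus or task templates):

```python
# Scatter a few task-relevant facts, in order, inside long distractor text,
# then ask a question that requires combining them.
facts = ["Mary went to the kitchen.", "Mary picked up the apple."]
question = "Where is the apple?"
filler = ["This sentence is unrelated background text."] * 200

# Keep the facts in their original order, separated and surrounded by
# large blocks of distractor sentences.
haystack = filler[:80] + [facts[0]] + filler[80:150] + [facts[1]] + filler[150:]

context = " ".join(haystack)
prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
print(f"{len(prompt)} characters of prompt; expected answer: kitchen")
```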
BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack
[ "Yuri Kuratov", "Aydar Bulatov", "Petr Anokhin", "Ivan Rodkin", "Dmitry Igorevich Sorokin", "Artyom Sorokin", "Mikhail Burtsev" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.10149
[ "https://github.com/booydar/babilong" ]
https://huggingface.co/papers/2406.10149
6
48
4
7
[]
[ "RMT-team/babilong", "RMT-team/babilong-1k-samples" ]
[ "RMT-team/babilong" ]
[]
[ "RMT-team/babilong", "RMT-team/babilong-1k-samples" ]
[ "RMT-team/babilong" ]
1
null
https://openreview.net/forum?id=twFlD3C9Rt
@inproceedings{ castillo-bolado2024beyond, title={Beyond Prompts: Dynamic Conversational Benchmarking of Large Language Models}, author={David Castillo-Bolado and Joseph Davidson and Finlay Gray and Marek Rosa}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=twFlD3C9Rt} }
We introduce a dynamic benchmarking system for conversational agents that evaluates their performance through a single, simulated, and lengthy user$\leftrightarrow$agent interaction. The interaction is a conversation between the user and agent, where multiple tasks are introduced and then undertaken concurrently. We context switch regularly to interleave the tasks, which constructs a realistic testing scenario in which we assess the Long-Term Memory, Continual Learning, and Information Integration capabilities of the agents. Results from both proprietary and open-source Large-Language Models show that LLMs in general perform well on single-task interactions, but they struggle on the same tasks when they are interleaved. Notably, short-context LLMs supplemented with an LTM system perform as well as or better than those with larger contexts. Our benchmark suggests that there are other challenges for LLMs responding to more natural interactions that contemporary benchmarks have heretofore not been able to capture.
Beyond Prompts: Dynamic Conversational Benchmarking of Large Language Models
[ "David Castillo-Bolado", "Joseph Davidson", "Finlay Gray", "Marek Rosa" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2409.20222
[ "https://github.com/GoodAI/goodai-ltm-benchmark" ]
https://huggingface.co/papers/2409.20222
0
0
0
4
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=tllpLtt14h
@inproceedings{ blacher2024einsum, title={Einsum Benchmark: Enabling the Development of Next-Generation Tensor Execution Engines}, author={Mark Blacher and Christoph Staudt and Julien Klaus and Maurice Wenig and Niklas Merk and Alexander Breuer and Max Engel and S{\"o}ren Laue and Joachim Giesen}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=tllpLtt14h} }
Modern artificial intelligence and machine learning workflows rely on efficient tensor libraries. However, tuning tensor libraries without considering the actual problems they are meant to execute can lead to a mismatch between expected and actual performance. Einsum libraries are tuned to efficiently execute tensor expressions with only a few, relatively large, dense, floating-point tensors. Yet practical applications of einsum cover a much broader range of tensor expressions than those that can currently be executed efficiently. For this reason, we have created a benchmark dataset that encompasses this broad range of tensor expressions, allowing future implementations of einsum to build upon and be evaluated against it. In addition, we also provide generators for einsum expressions and converters to einsum expressions in our repository, so that additional data can be generated as needed. The benchmark dataset, the generators and converters are released openly and are publicly available at https://benchmark.einsum.org.
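For readers unfamiliar with einsum, a tensor expression of the kind such a benchmark collects is just an index string plus operands. The example below is our own, not taken from the dataset, and uses NumPy's built-in contraction-path optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 16))
B = rng.random((16, 32))
C = rng.random((32, 8))

# "ij,jk,ki->" contracts all indices: it equals trace(A @ B @ C).
result = np.einsum("ij,jk,ki->", A, B, C, optimize=True)
print(float(result), float(np.trace(A @ B @ C)))  # the two values agree
```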
Einsum Benchmark: Enabling the Development of Next-Generation Tensor Execution Engines
[ "Mark Blacher", "Christoph Staudt", "Julien Klaus", "Maurice Wenig", "Niklas Merk", "Alexander Breuer", "Max Engel", "Sören Laue", "Joachim Giesen" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tU8Xgybudy
@inproceedings{ bonnen2024humanlevel, title={Human-level shape inferences: A benchmark for evaluating the 3D understanding of vision models}, author={tyler bonnen and Stephanie Fu and Yutong Bai and Thomas O'Connell and Yoni Friedman and Nancy Kanwisher and Joshua B. Tenenbaum and Alexei A Efros}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=tU8Xgybudy} }
We introduce a benchmark to directly evaluate the alignment between human observers and vision models on a 3D shape inference task. We leverage an experimental design from the cognitive sciences: given a set of images, participants identify which contain the same/different objects, despite considerable viewpoint variation. We draw from a diverse range of images that include common objects (e.g., chairs) as well as abstract shapes (i.e., procedurally generated 'nonsense' objects). After constructing over 2000 unique image sets, we administer these tasks to human participants, collecting 35K trials of behavioral data from over 500 participants. This includes explicit choice behaviors as well as intermediate measures, such as reaction time and gaze data. We then evaluate the performance of common vision models (e.g., DINOv2, MAE, CLIP). We find that humans outperform all models by a wide margin. Using a multi-scale evaluation approach, we identify underlying similarities and differences between models and humans: while human-model performance is correlated, humans allocate more time/processing on challenging trials. All images, data, and code can be accessed via our project page.
Evaluating Multiview Object Consistency in Humans and Image Models
[ "tyler bonnen", "Stephanie Fu", "Yutong Bai", "Thomas O'Connell", "Yoni Friedman", "Nancy Kanwisher", "Joshua B. Tenenbaum", "Alexei A Efros" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2409.05862
[ "https://github.com/tzler/mochi_code" ]
https://huggingface.co/papers/2409.05862
5
8
2
8
[]
[ "tzler/MOCHI" ]
[]
[]
[ "tzler/MOCHI" ]
[]
1
null
https://openreview.net/forum?id=tPsw4NeLZx
@inproceedings{ shen2024mmwlauslan, title={{MM}-{WLA}uslan: Multi-View Multi-Modal Word-Level Australian Sign Language Recognition Dataset}, author={Xin Shen and Heming Du and Hongwei Sheng and Shuyun Wang and Hui Chen and Huiqiang Chen and Zhuojie Wu and Xiaobiao Du and Jiaying Ying and Ruihan Lu and Qingzheng Xu and Xin Yu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=tPsw4NeLZx} }
Isolated Sign Language Recognition (ISLR) focuses on identifying individual sign language glosses. Considering the diversity of sign languages across geographical regions, developing region-specific ISLR datasets is crucial for supporting communication and research. Auslan, as a sign language specific to Australia, still lacks a dedicated large-scale word-level dataset for the ISLR task. To fill this gap, we curate **the first** large-scale Multi-view Multi-modal Word-Level Australian Sign Language recognition dataset, dubbed MM-WLAuslan. Compared to other publicly available datasets, MM-WLAuslan exhibits three significant advantages: (1) **the largest amount** of data, (2) **the most extensive** vocabulary, and (3) **the most diverse** multi-modal camera views. Specifically, we record **282K+** sign videos covering **3,215** commonly used Auslan glosses presented by **73** signers in a studio environment. Moreover, our filming system includes two different types of cameras, i.e., three Kinect-V2 cameras and a RealSense camera. We position cameras hemispherically around the front half of the model and simultaneously record videos using all four cameras. Furthermore, we benchmark results with state-of-the-art methods for various multi-modal ISLR settings on MM-WLAuslan, including multi-view, cross-camera, and cross-view. Experiment results indicate that MM-WLAuslan is a challenging ISLR dataset, and we hope this dataset will contribute to the development of Auslan and the advancement of sign languages worldwide. All datasets and benchmarks are available at MM-WLAuslan.
MM-WLAuslan: Multi-View Multi-Modal Word-Level Australian Sign Language Recognition Dataset
[ "Xin Shen", "Heming Du", "Hongwei Sheng", "Shuyun Wang", "Hui Chen", "Huiqiang Chen", "Zhuojie Wu", "Xiaobiao Du", "Jiaying Ying", "Ruihan Lu", "Qingzheng Xu", "Xin Yu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.19488
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tNCdnpEKrR
@inproceedings{ chen2024qgym, title={{QG}ym: Scalable Simulation and Benchmarking of Queuing Network Controllers}, author={Haozhe Chen and Ang Li and Ethan Che and Jing Dong and Tianyi Peng and Hongseok Namkoong}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=tNCdnpEKrR} }
Queuing network control allows allocation of scarce resources to manage congestion, a fundamental problem in manufacturing, communications, and healthcare. Compared to standard RL problems, queueing problems are distinguished by unique challenges: i) a system operating in continuous time, ii) high stochasticity, and iii) long horizons over which the system can become unstable (exploding delays). To provide the empirical foundations for methodological development tackling these challenges, we present an open-sourced queueing simulation framework, QGym, that benchmarks queueing policies across realistic problem instances. Our modular framework allows researchers to build on our initial instances, which provide a wide range of environments including parallel servers, criss-cross, tandem, and re-entrant networks, as well as a realistically calibrated hospital queuing system. From these, various policies can be easily tested, including both model-free RL methods and classical queuing policies. Our testbed significantly expands the scope of empirical benchmarking in prior work, and complements the traditional focus on evaluating algorithms based on mathematical guarantees in idealized settings. QGym code is open-sourced at https://github.com/namkoong-lab/QGym.
QGym: Scalable Simulation and Benchmarking of Queuing Network Controllers
[ "Haozhe Chen", "Ang Li", "Ethan Che", "Jing Dong", "Tianyi Peng", "Hongseok Namkoong" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.06170
[ "https://github.com/namkoong-lab/qgym" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=tN61DTr4Ed
@inproceedings{ xie2024osworld, title={{OSW}orld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments}, author={Tianbao Xie and Danyang Zhang and Jixuan Chen and Xiaochuan Li and Siheng Zhao and Ruisheng Cao and Toh Jing Hua and Zhoujun Cheng and Dongchan Shin and Fangyu Lei and Yitao Liu and Yiheng Xu and Shuyan Zhou and Silvio Savarese and Caiming Xiong and Victor Zhong and Tao Yu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=tN61DTr4Ed} }
Autonomous agents that accomplish complex computer tasks with minimal human interventions have the potential to transform human-computer interaction, significantly enhancing accessibility and productivity. However, existing benchmarks either lack an interactive environment or are limited to environments specific to certain applications or domains, failing to reflect the diverse and complex nature of real-world computer use, thereby limiting the scope of tasks and agent scalability. To address this issue, we introduce OSWorld, the first-of-its-kind scalable, real computer environment for multimodal agents, supporting task setup, execution-based evaluation, and interactive learning across various operating systems such as Ubuntu, Windows, and macOS. OSWorld can serve as a unified, integrated computer environment for assessing open-ended computer tasks that involve arbitrary applications. Building upon OSWorld, we create a benchmark of 369 computer tasks involving real web and desktop apps in open domains, OS file I/O, and workflows spanning multiple applications. Each task example is derived from real-world computer use cases and includes a detailed initial state setup configuration and a custom execution-based evaluation script for reliable, reproducible evaluation. Extensive evaluation of state-of-the-art LLM/VLM-based agents on OSWorld reveals significant deficiencies in their ability to serve as computer assistants. While humans can accomplish over 72.36% of the tasks, the best model achieves only 12.24% success, primarily struggling with GUI grounding and operational knowledge. Comprehensive analysis using OSWorld provides valuable insights for developing multimodal generalist agents that were not possible with previous benchmarks. Our code, environment, baseline models, and data are publicly available at [this https URL](https://os-world.github.io/).
OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
[ "Tianbao Xie", "Danyang Zhang", "Jixuan Chen", "Xiaochuan Li", "Siheng Zhao", "Ruisheng Cao", "Toh Jing Hua", "Zhoujun Cheng", "Dongchan Shin", "Fangyu Lei", "Yitao Liu", "Yiheng Xu", "Shuyan Zhou", "Silvio Savarese", "Caiming Xiong", "Victor Zhong", "Tao Yu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2404.07972
[ "" ]
https://huggingface.co/papers/2404.07972
8
46
1
17
[]
[ "xlangai/ubuntu_osworld", "xlangai/windows_osworld" ]
[]
[]
[ "xlangai/ubuntu_osworld", "xlangai/windows_osworld" ]
[]
1
null
https://openreview.net/forum?id=t9aThFL1lE
@inproceedings{ zhang2024unlearncanvas, title={UnlearnCanvas: Stylized Image Dataset for Enhanced Machine Unlearning Evaluation in Diffusion Models}, author={Yihua Zhang and Chongyu Fan and Yimeng Zhang and Yuguang Yao and Jinghan Jia and Jiancheng Liu and Gaoyuan Zhang and Gaowen Liu and Ramana Rao Kompella and Xiaoming Liu and Sijia Liu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=t9aThFL1lE} }
The technological advancements in diffusion models (DMs) have demonstrated unprecedented capabilities in text-to-image generation and are widely used in diverse applications. However, they have also raised significant societal concerns, such as the generation of harmful content and copyright disputes. Machine unlearning (MU) has emerged as a promising solution, capable of removing undesired generative capabilities from DMs. However, existing MU evaluation systems present several key challenges that can result in incomplete and inaccurate assessments. To address these issues, we propose UnlearnCanvas, a comprehensive high-resolution stylized image dataset that facilitates the evaluation of the unlearning of artistic styles and associated objects. This dataset enables the establishment of a standardized, automated evaluation framework with 7 quantitative metrics assessing various aspects of the unlearning performance for DMs. Through extensive experiments, we benchmark 9 state-of-the-art MU methods for DMs, revealing novel insights into their strengths, weaknesses, and underlying mechanisms. Additionally, we explore challenging unlearning scenarios for DMs to evaluate worst-case performance against adversarial prompts, the unlearning of finer-scale concepts, and sequential unlearning. We hope that this study can pave the way for developing more effective, accurate, and robust DM unlearning methods, ensuring safer and more ethical applications of DMs in the future. The dataset, benchmark, and codes are publicly available at this [link](https://unlearn-canvas.netlify.app/).
UnlearnCanvas: Stylized Image Dataset for Enhanced Machine Unlearning Evaluation in Diffusion Models
[ "Yihua Zhang", "Chongyu Fan", "Yimeng Zhang", "Yuguang Yao", "Jinghan Jia", "Jiancheng Liu", "Gaoyuan Zhang", "Gaowen Liu", "Ramana Rao Kompella", "Xiaoming Liu", "Sijia Liu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "https://github.com/optml-group/unlearncanvas" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=t7xYNN7RJC
@inproceedings{ peterson2024vocal, title={Vocal Call Locator Benchmark ({VCL}) for localizing rodent vocalizations from multi-channel audio}, author={Ralph E Peterson and Aramis Tanelus and Christopher A. Ick and Bartul Mimica and M J Niegil Francis and Violet Jane Ivan and Aman Choudhri and Annegret Falkner and Mala Murthy and David M Schneider and Dan H. Sanes and Alex H Williams}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=t7xYNN7RJC} }
Understanding the behavioral and neural dynamics of social interactions is a goal of contemporary neuroscience. Many machine learning methods have emerged in recent years to make sense of complex video and neurophysiological data that result from these experiments. Less focus has been placed on understanding how animals process acoustic information, including social vocalizations. A critical step to bridge this gap is determining the senders and receivers of acoustic information in social interactions. While sound source localization (SSL) is a classic problem in signal processing, existing approaches are limited in their ability to localize animal-generated sounds in standard laboratory environments. Advances in deep learning methods for SSL are likely to help address these limitations, however there are currently no publicly available models, datasets, or benchmarks to systematically evaluate SSL algorithms in the domain of bioacoustics. Here, we present the VCL Benchmark: the first large-scale dataset for benchmarking SSL algorithms in rodents. We acquired synchronized video and multi-channel audio recordings of 767,295 sounds with annotated ground truth sources across 9 conditions. The dataset provides benchmarks which evaluate SSL performance on real data, simulated acoustic data, and a mixture of real and simulated data. We intend for this benchmark to facilitate knowledge transfer between the neuroscience and acoustic machine learning communities, which have had limited overlap.
Vocal Call Locator Benchmark (VCL) for localizing rodent vocalizations from multi-channel audio
[ "Ralph E Peterson", "Aramis Tanelus", "Christopher A. Ick", "Bartul Mimica", "M J Niegil Francis", "Violet Jane Ivan", "Aman Choudhri", "Annegret Falkner", "Mala Murthy", "David M Schneider", "Dan H. Sanes", "Alex H Williams" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=t6LQXcFTEn
@inproceedings{ liu2024audiomarkbench, title={AudioMarkBench: Benchmarking Robustness of Audio Watermarking}, author={Hongbin Liu and Moyang Guo and Zhengyuan Jiang and Lun Wang and Neil Zhenqiang Gong}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=t6LQXcFTEn} }
The increasing realism of synthetic speech, driven by advancements in text-to-speech models, raises ethical concerns regarding impersonation and disinformation. Audio watermarking offers a promising solution via embedding human-imperceptible watermarks into AI-generated audios. However, the robustness of audio watermarking against common/adversarial perturbations remains understudied. We present AudioMarkBench, the first systematic benchmark for evaluating the robustness of audio watermarking against *watermark removal* and *watermark forgery*. AudioMarkBench includes a new dataset created from Common-Voice across languages, biological sexes, and ages, 3 state-of-the-art watermarking methods, and 15 types of perturbations. We benchmark the robustness of these methods against the perturbations in no-box, black-box, and white-box settings. Our findings highlight the vulnerabilities of current watermarking techniques and emphasize the need for more robust and fair audio watermarking solutions. Our dataset and code are publicly available at https://github.com/moyangkuo/AudioMarkBench.
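As one concrete example of the kind of "no-box" perturbation such a robustness evaluation might apply to a watermarked waveform, here is a simple additive-Gaussian-noise attack at a target signal-to-noise ratio; the function and parameter names are ours, not AudioMarkBench's.

```python
import numpy as np


def add_gaussian_noise(waveform: np.ndarray, snr_db: float, seed: int = 0) -> np.ndarray:
    """Add white Gaussian noise so the result has approximately the given SNR in dB."""
    rng = np.random.default_rng(seed)
    signal_power = float(np.mean(waveform ** 2))
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
    return waveform + noise


# One second of a 440 Hz tone at 16 kHz, standing in for a watermarked clip.
audio = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
perturbed = add_gaussian_noise(audio, snr_db=20.0)
print(perturbed.shape)  # (16000,)
```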
AudioMarkBench: Benchmarking Robustness of Audio Watermarking
[ "Hongbin Liu", "Moyang Guo", "Zhengyuan Jiang", "Lun Wang", "Neil Zhenqiang Gong" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.06979
[ "https://github.com/moyangkuo/audiomarkbench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=t1mAXb4Cop
@inproceedings{ guo2024can, title={Can {LLM}s Solve Molecule Puzzles? A Multimodal Benchmark for Molecular Structure Elucidation}, author={Kehan Guo and Bozhao Nan and Yujun Zhou and Taicheng Guo and Zhichun Guo and Mihir Surve and Zhenwen Liang and Nitesh V Chawla and Olaf Wiest and Xiangliang Zhang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=t1mAXb4Cop} }
Large Language Models (LLMs) have shown significant problem-solving capabilities across predictive and generative tasks in chemistry. However, their proficiency in multi-step chemical reasoning remains underexplored. We introduce a new challenge: molecular structure elucidation, which involves deducing a molecule’s structure from various types of spectral data. Solving such a molecular puzzle, akin to solving crossword puzzles, poses reasoning challenges that require integrating clues from diverse sources and engaging in iterative hypothesis testing. To address this challenging problem with LLMs, we present \textbf{MolPuzzle}, a benchmark comprising 217 instances of structure elucidation, which feature over 23,000 QA samples presented in a sequential puzzle-solving process, involving three interlinked sub-tasks: molecule understanding, spectrum interpretation, and molecule construction. Our evaluation of 12 LLMs reveals that the best-performing LLM, GPT-4o, performs significantly worse than humans, with only a small portion (1.4\%) of its answers exactly matching the ground truth. However, it performs nearly perfectly in the first subtask of molecule understanding, achieving accuracy close to 100\%. This discrepancy highlights the potential of developing advanced LLMs with improved chemical reasoning capabilities in the other two sub-tasks. Our MolPuzzle dataset and evaluation code are available at this \href{https://github.com/KehanGuo2/MolPuzzle}{link}.
Can LLMs Solve Molecule Puzzles? A Multimodal Benchmark for Molecular Structure Elucidation
[ "Kehan Guo", "Bozhao Nan", "Yujun Zhou", "Taicheng Guo", "Zhichun Guo", "Mihir Surve", "Zhenwen Liang", "Nitesh V Chawla", "Olaf Wiest", "Xiangliang Zhang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sw9iOHGxgm
@inproceedings{ krojer2024learning, title={Learning Action and Reasoning-Centric Image Editing from Videos and Simulation}, author={Benno Krojer and Dheeraj Vattikonda and Luis Lara and Varun Jampani and Eva Portelance and Christopher Pal and Siva Reddy}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=sw9iOHGxgm} }
An image editing model should be able to perform diverse edits, ranging from object replacement, changing attributes or style, to performing actions or movement, which require many forms of reasoning. Current *general* instruction-guided editing models have significant shortcomings with action and reasoning-centric edits. Object, attribute or stylistic changes can be learned from visually static datasets. On the other hand, high-quality data for action and reasoning-centric edits is scarce and has to come from entirely different sources that cover e.g. physical dynamics, temporality and spatial reasoning. To this end, we meticulously curate the **A**U**RO**R**A** Dataset (**A**ction-**R**easoning-**O**bject-**A**ttribute), a collection of high-quality training data, human-annotated and curated from videos and simulation engines. We focus on a key aspect of quality training data: triplets (source image, prompt, target image) contain a single meaningful visual change described by the prompt, i.e., *truly minimal* changes between source and target images. To demonstrate the value of our dataset, we evaluate an **A**U**RO**R**A**-finetuned model on a new expert-curated benchmark (**A**U**RO**R**A-Bench**) covering 8 diverse editing tasks. Our model significantly outperforms previous editing models as judged by human raters. For automatic evaluations, we find important flaws in previous metrics and caution their use for semantically hard editing tasks. Instead, we propose a new automatic metric that focuses on discriminative understanding. We hope that our efforts: (1) curating a quality training dataset and an evaluation benchmark, (2) developing critical evaluations, and (3) releasing a state-of-the-art model, will fuel further progress on general image editing.
Learning Action and Reasoning-Centric Image Editing from Videos and Simulation
[ "Benno Krojer", "Dheeraj Vattikonda", "Luis Lara", "Varun Jampani", "Eva Portelance", "Christopher Pal", "Siva Reddy" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "https://github.com/McGill-NLP/AURORA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=snNuvAOQxB
@inproceedings{ li2024meqa, title={{MEQA}: A Benchmark for Multi-hop Event-centric Question Answering with Explanations}, author={Ruosen Li and Zimu Wang and Son Quoc Tran and Lei Xia and Xinya Du}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=snNuvAOQxB} }
Existing benchmarks for multi-hop question answering (QA) primarily evaluate models based on their ability to reason about entities and the relationships between them. However, there's a lack of insight into how these models perform in terms of both events and entities. In this paper, we introduce a novel semi-automatic question generation strategy by composing event structures from information extraction (IE) datasets and present the first Multi-hop Event-centric Question Answering (MEQA) benchmark. It contains (1) 2,243 challenging questions that require a diverse range of complex reasoning over entity-entity, entity-event, and event-event relations; (2) corresponding multi-step QA-format event reasoning chain (explanation) which leads to the answer for each question. We also introduce two metrics for evaluating explanations: completeness and logical consistency. We conduct comprehensive benchmarking and analysis, which shows that MEQA is challenging for the latest state-of-the-art models encompassing large language models (LLMs); and how they fall short of providing faithful explanations of the event-centric reasoning process.
MEQA: A Benchmark for Multi-hop Event-centric Question Answering with Explanations
[ "Ruosen Li", "Zimu Wang", "Son Quoc Tran", "Lei Xia", "Xinya Du" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=smxQvTmdGS
@inproceedings{ liu2024bias, title={Bias and Volatility: A Statistical Framework for Evaluating Large Language Model's Stereotypes and the Associated Generation Inconsistency}, author={Yiran Liu and Ke Yang and Zehan Qi and Xiao Liu and Yang Yu and ChengXiang Zhai}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=smxQvTmdGS} }
We present a novel statistical framework for analyzing stereotypes in large language models (LLMs) by systematically estimating the bias and variation in their generation. Current evaluation metrics in the alignment literature often overlook the randomness of stereotypes caused by the inconsistent generative behavior of LLMs. For example, this inconsistency can result in LLMs displaying contradictory stereotypes, including those related to gender or race, for identical professions across varied contexts. Neglecting such inconsistency could lead to misleading conclusions in alignment evaluations and hinder the accurate assessment of the risk of LLM applications perpetuating or amplifying social stereotypes and unfairness. This work proposes a Bias-Volatility Framework (BVF) that estimates the probability distribution function of LLM stereotypes. Specifically, since the stereotype distribution fully captures an LLM's generation variation, BVF enables the assessment of both the likelihood and extent to which its outputs are against vulnerable groups, thereby allowing for the quantification of the LLM's aggregated discrimination risk. Furthermore, we introduce a mathematical framework to decompose an LLM's aggregated discrimination risk into two components: bias risk and volatility risk, originating from the mean and variation of the LLM's stereotype distribution, respectively. We apply BVF to assess 12 commonly adopted LLMs and compare their risk levels. Our findings reveal that: i) Bias risk is the primary cause of discrimination risk in LLMs; ii) Most LLMs exhibit significant pro-male stereotypes for nearly all careers; iii) Alignment with reinforcement learning from human feedback lowers discrimination by reducing bias, but increases volatility; iv) Discrimination risk in LLMs correlates with key socio-economic factors like professional salaries. Finally, we emphasize that BVF can also be used to assess other dimensions of generation inconsistency's impact on LLM behavior beyond stereotypes, such as knowledge mastery.
Bias and Volatility: A Statistical Framework for Evaluating Large Language Model's Stereotypes and the Associated Generation Inconsistency
[ "Yiran Liu", "Ke Yang", "Zehan Qi", "Xiao Liu", "Yang Yu", "ChengXiang Zhai" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=slqbOc67W8
@inproceedings{ trivedi2024melting, title={Melting Pot Contest: Charting the Future of Generalized Cooperative Intelligence}, author={Rakshit Trivedi and Akbir Khan and Jesse Clifton and Lewis Hammond and Edgar A. Du{\'e}{\~n}ez-Guzm{\'a}n and Dipam Chakraborty and John P Agapiou and Jayd Matyas and Sasha Vezhnevets and Barna P{\'a}sztor and Yunke Ao and Omar G. Younis and Jiawei Huang and Benjamin Swain and Haoyuan Qin and Mian Deng and Ziwei Deng and Utku Erdo{\u{g}}anaras and Yue Zhao and Marko Tesic and Natasha Jaques and Jakob Nicolaus Foerster and Vincent Conitzer and Jose Hernandez-Orallo and Dylan Hadfield-Menell and Joel Z Leibo}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=slqbOc67W8} }
Multi-agent AI research promises a path to develop human-like and human-compatible intelligent technologies that complement the solipsistic view of other approaches, which mostly do not consider interactions between agents. Aiming to make progress in this direction, the Melting Pot contest 2023 focused on the problem of cooperation among interacting agents and challenged researchers to push the boundaries of multi-agent reinforcement learning (MARL) for mixed-motive games. The contest leveraged the Melting Pot environment suite to rigorously evaluate how well agents can adapt their cooperative skills to interact with novel partners in unforeseen situations. Unlike other reinforcement learning challenges, this challenge focused on social rather than environmental generalization. In particular, a population of agents performs well in Melting Pot when its component individuals are adept at finding ways to cooperate both with others in their population and with strangers. Thus Melting Pot measures cooperative intelligence. The contest attracted over 600 participants across 100+ teams globally and was a success on multiple fronts: (i) it contributed to our goal of pushing the frontiers of MARL towards building more cooperatively intelligent agents, evidenced by several submissions that outperformed established baselines; (ii) it attracted a diverse range of participants, from independent researchers to industry affiliates and academic labs, both with strong background and new interest in the area alike, broadening the field’s demographic and intellectual diversity; and (iii) analyzing the submitted agents provided important insights, highlighting areas for improvement in evaluating agents' cooperative intelligence. This paper summarizes the design aspects and results of the contest and explores the potential of Melting Pot as a benchmark for studying Cooperative AI. We further analyze the top solutions and conclude with a discussion on promising directions for future research.
Melting Pot Contest: Charting the Future of Generalized Cooperative Intelligence
[ "Rakshit Trivedi", "Akbir Khan", "Jesse Clifton", "Lewis Hammond", "Edgar A. Duéñez-Guzmán", "Dipam Chakraborty", "John P Agapiou", "Jayd Matyas", "Sasha Vezhnevets", "Barna Pásztor", "Yunke Ao", "Omar G. Younis", "Jiawei Huang", "Benjamin Swain", "Haoyuan Qin", "Mian Deng", "Ziwei Deng", "Utku Erdoğanaras", "Yue Zhao", "Marko Tesic", "Natasha Jaques", "Jakob Nicolaus Foerster", "Vincent Conitzer", "Jose Hernandez-Orallo", "Dylan Hadfield-Menell", "Joel Z Leibo" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sLzD2rw9Ce
@inproceedings{ wang2024drivingdojo, title={DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model}, author={Yuqi Wang and Ke Cheng and Jiawei He and Qitai Wang and Hengchen Dai and Yuntao Chen and Fei Xia and Zhaoxiang Zhang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=sLzD2rw9Ce} }
Driving world models have gained increasing attention due to their ability to model complex physical dynamics. However, their superb modeling capability is yet to be fully unleashed due to the limited video diversity in current driving datasets. We introduce DrivingDojo, the first dataset tailor-made for training interactive world models with complex driving dynamics. Our dataset features video clips with a complete set of driving maneuvers, diverse multi-agent interplay, and rich open-world driving knowledge, laying a stepping stone for future world model development. We further define an action instruction following (AIF) benchmark for world models and demonstrate the superiority of the proposed dataset for generating action-controlled future predictions.
DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model
[ "Yuqi Wang", "Ke Cheng", "Jiawei He", "Qitai Wang", "Hengchen Dai", "Yuntao Chen", "Fei Xia", "Zhaoxiang Zhang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.10738
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sHBn3PNcwU
@inproceedings{ yang2024emgbench, title={{EMGB}ench: Benchmarking Out-of-Distribution Generalization and Adaptation for Electromyography}, author={Jehan Yang and Maxwell J. Soh and Vivianna Lieu and Douglas J Weber and Zackory Erickson}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=sHBn3PNcwU} }
This paper introduces the first generalization and adaptation benchmark using machine learning for evaluating out-of-distribution performance of electromyography (EMG) classification algorithms. The ability of an EMG classifier to handle inputs drawn from a different distribution than the training distribution is critical for real-world deployment as a control interface. By predicting the user’s intended gesture using EMG signals, we can create a wearable solution to control assistive technologies, such as computers, prosthetics, and mobile manipulator robots. This new out-of-distribution benchmark consists of two major tasks that have utility for building robust and adaptable control interfaces: 1) intersubject classification, and 2) adaptation using train-test splits for time-series. This benchmark spans nine datasets, the largest collection of EMG datasets in a benchmark. Among these, a new dataset is introduced, featuring a novel, easy-to-wear high-density EMG wearable for data collection. The lack of open-source benchmarks has made comparing accuracy results between papers challenging for the EMG research community. This new benchmark provides researchers with a valuable resource for analyzing practical measures of out-of-distribution performance for EMG datasets. Our code and data from our new dataset can be found at emgbench.github.io.
EMGBench: Benchmarking Out-of-Distribution Generalization and Adaptation for Electromyography
[ "Jehan Yang", "Maxwell J. Soh", "Vivianna Lieu", "Douglas J Weber", "Zackory Erickson" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.23625
[ "https://github.com/jehanyang/emgbench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=sAxVIWQOzo
@inproceedings{ nasir2024gametraversalbenchmark, title={GameTraversalBenchmark: Evaluating Planning Abilities Of Large Language Models Through Traversing 2D Game Maps}, author={Muhammad Umair Nasir and Steven James and Julian Togelius}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=sAxVIWQOzo} }
Large language models (LLMs) have recently demonstrated great success in generating and understanding natural language. While they have also shown potential beyond the domain of natural language, it remains an open question as to what extent and in which way these LLMs can plan. We investigate their planning capabilities by proposing \texttt{GameTraversalBenchmark (GTB)}, a benchmark consisting of diverse 2D grid-based game maps. An LLM succeeds if it can traverse through given objectives, with a minimum number of steps and a minimum number of generation errors. We evaluate a number of LLMs on \texttt{GTB} and found that GPT-4-Turbo achieved the highest score of $44.97\%$ on \texttt{GTB\_Score} (GTBS), a composite score that combines the three above criteria. Furthermore, we preliminarily test large reasoning models, namely o1, which scores $67.84\%$ on GTBS, indicating that the benchmark remains challenging for current models. Code, data, and documentation are available at \url{https://github.com/umair-nasir14/Game-Traversal-Benchmark}.
GameTraversalBenchmark: Evaluating Planning Abilities Of Large Language Models Through Traversing 2D Game Maps
[ "Muhammad Umair Nasir", "Steven James", "Julian Togelius" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.07765
[ "https://github.com/umair-nasir14/game-traversal-benchmark" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=s8h2jSN6a6
@inproceedings{ liu2024mmdu, title={{MMDU}: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for {LVLM}s}, author={Ziyu Liu and Tao Chu and Yuhang Zang and Xilin Wei and Xiaoyi Dong and Pan Zhang and Zijian Liang and Yuanjun Xiong and Yu Qiao and Dahua Lin and Jiaqi Wang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=s8h2jSN6a6} }
Generating natural and meaningful responses to communicate with multi-modal human inputs is a fundamental capability of Large Vision-Language Models (LVLMs). While current open-source LVLMs demonstrate promising performance in simplified scenarios such as single-turn single-image input, they fall short in real-world conversation scenarios such as following instructions in a long context history with multi-turn and multi-images. Existing LVLM benchmarks primarily focus on single-choice questions or short-form responses, which do not adequately assess the capabilities of LVLMs in real-world human-AI interaction applications. Therefore, we introduce MMDU, a comprehensive benchmark, and MMDU-45k, a large-scale instruction tuning dataset, designed to evaluate and improve LVLMs' abilities in multi-turn and multi-image conversations. We employ the clustering algorithm to find the relevant images and textual descriptions from the open-source Wikipedia and construct the question-answer pairs by human annotators with the assistance of the GPT-4o model. MMDU has a maximum of 18k image+text tokens, 20 images, and 27 turns, which is at least 5x longer than previous benchmarks and poses challenges to current LVLMs. Our in-depth analysis of 15 representative LVLMs using MMDU reveals that open-source LVLMs lag behind closed-source counterparts due to limited conversational instruction tuning data. We demonstrate that fine-tuning open-source LVLMs on MMDU-45k significantly addresses this gap, generating longer and more accurate conversations, and improving scores on MMDU and existing benchmarks (MMStar: +1.1%, MathVista: +1.5%, ChartQA: +1.2%). Our contributions pave the way for bridging the gap between current LVLM models and real-world application demands. The links to MMDU and MMDU-45k are available in the supplementary material.
MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs
[ "Ziyu Liu", "Tao Chu", "Yuhang Zang", "Xilin Wei", "Xiaoyi Dong", "Pan Zhang", "Zijian Liang", "Yuanjun Xiong", "Yu Qiao", "Dahua Lin", "Jiaqi Wang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.11833
[ "https://github.com/liuziyu77/mmdu" ]
https://huggingface.co/papers/2406.11833
6
61
4
11
[]
[ "laolao77/MMDU" ]
[]
[]
[ "laolao77/MMDU" ]
[]
1
null
https://openreview.net/forum?id=s1K5Z5QPog
@inproceedings{ nathaniel2024chaosbench, title={ChaosBench: A Multi-Channel, Physics-Based Benchmark for Subseasonal-to-Seasonal Climate Prediction}, author={Juan Nathaniel and Yongquan Qu and Tung Nguyen and Sungduk Yu and Julius Busecke and Aditya Grover and Pierre Gentine}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=s1K5Z5QPog} }
Accurate prediction of climate in the subseasonal-to-seasonal scale is crucial for disaster preparedness and robust decision making amidst climate change. Yet, forecasting beyond the weather timescale is challenging because it deals with problems other than initial condition, including boundary interaction, butterfly effect, and our inherent lack of physical understanding. At present, existing benchmarks tend to have a shorter forecasting range of up to 15 days, do not include a wide range of operational baselines, and lack physics-based constraints for explainability. Thus, we propose ChaosBench, a challenging benchmark to extend the predictability range of data-driven weather emulators to S2S timescale. First, ChaosBench is comprised of variables beyond the typical surface-atmospheric ERA5 to also include ocean, ice, and land reanalysis products that span over 45 years to allow for full Earth system emulation that respects boundary conditions. We also propose physics-based, in addition to deterministic and probabilistic metrics, to ensure a physically-consistent ensemble that accounts for butterfly effect. Furthermore, we evaluate on a diverse set of physics-based forecasts from four national weather agencies as baselines to our data-driven counterparts such as ViT/ClimaX, PanguWeather, GraphCast, and FourCastNetV2. Overall, we find methods originally developed for weather-scale applications fail on the S2S task: their performance simply collapses to an unskilled climatology. Nonetheless, we outline and demonstrate several strategies that can extend the predictability range of existing weather emulators, including the use of ensembles, robust control of error propagation, and the use of physics-informed models. Our benchmark, datasets, and instructions are available at https://leap-stc.github.io/ChaosBench.
ChaosBench: A Multi-Channel, Physics-Based Benchmark for Subseasonal-to-Seasonal Climate Prediction
[ "Juan Nathaniel", "Yongquan Qu", "Tung Nguyen", "Sungduk Yu", "Julius Busecke", "Aditya Grover", "Pierre Gentine" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2402.00712
[ "https://github.com/leap-stc/ChaosBench" ]
https://huggingface.co/papers/2402.00712
1
1
0
7
[]
[ "LEAP/ChaosBench" ]
[]
[]
[ "LEAP/ChaosBench" ]
[]
1
null
https://openreview.net/forum?id=rovpCs3ZEO
@inproceedings{ wang2024fedmeki, title={{FEDMEKI}: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection}, author={Jiaqi Wang and Xiaochen Wang and Lingjuan Lyu and Jinghui Chen and Fenglong Ma}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=rovpCs3ZEO} }
This study introduces the Federated Medical Knowledge Injection (FedMEKI) platform, a new benchmark designed to address the unique challenges of integrating medical knowledge into foundation models under privacy constraints. By leveraging a cross-silo federated learning approach, FedMEKI circumvents the issues associated with centralized data collection, which is often prohibited under health regulations like the Health Insurance Portability and Accountability Act (HIPAA) in the USA. The platform is meticulously designed to handle multi-site, multi-modal, and multi-task medical data, which includes 7 medical modalities, including images, signals, texts, laboratory test results, vital signs, input variables, and output variables. The curated dataset to validate FedMEKI covers 8 medical tasks, including 6 classification tasks (lung opacity detection, COVID-19 detection, electrocardiogram (ECG) abnormal detection, mortality prediction, sepsis protection, and enlarged cardiomediastinum detection) and 2 generation tasks (medical visual question answering (MedVQA) and ECG noise clarification). This comprehensive dataset is partitioned across several clients to facilitate the decentralized training process under 16 benchmark approaches. FedMEKI not only preserves data privacy but also enhances the capability of medical foundation models by allowing them to learn from a broader spectrum of medical knowledge without direct data exposure, thereby setting a new benchmark in the application of foundation models within the healthcare sector.
FEDMEKI: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection
[ "Jiaqi Wang", "Xiaochen Wang", "Lingjuan Lyu", "Jinghui Chen", "Fenglong Ma" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2408.09227
[ "https://github.com/psudslab/FEDMEKI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rfbSL1qXN3
@inproceedings{ hu2024ssdm, title={{SS}3{DM}: Benchmarking Street-View Surface Reconstruction with a Synthetic 3D Mesh Dataset}, author={Yubin Hu and Kairui Wen and Heng Zhou and Xiaoyang Guo and Yong-jin Liu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=rfbSL1qXN3} }
Reconstructing accurate 3D surfaces for street-view scenarios is crucial for applications such as digital entertainment and autonomous driving simulation. However, existing street-view datasets, including KITTI, Waymo, and nuScenes, only offer noisy LiDAR points as ground-truth data for geometric evaluation of reconstructed surfaces. These geometric ground-truths often lack the necessary precision to evaluate surface positions and do not provide data for assessing surface normals. To overcome these challenges, we introduce the SS3DM dataset, comprising precise \textbf{S}ynthetic \textbf{S}treet-view \textbf{3D} \textbf{M}esh models exported from the CARLA simulator. These mesh models facilitate accurate position evaluation and include normal vectors for evaluating surface normals. To simulate the input data in realistic driving scenarios for 3D reconstruction, we virtually drive a vehicle equipped with six RGB cameras and five LiDAR sensors in diverse outdoor scenes. Leveraging this dataset, we establish a benchmark for state-of-the-art surface reconstruction methods, providing a comprehensive evaluation of the associated challenges. For more information, visit our homepage at https://ss3dm.top.
SS3DM: Benchmarking Street-View Surface Reconstruction with a Synthetic 3D Mesh Dataset
[ "Yubin Hu", "Kairui Wen", "Heng Zhou", "Xiaoyang Guo", "Yong-jin Liu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.21739
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=raOYixthlY
@inproceedings{ liu2024flexmol, title={FlexMol: A Flexible Toolkit for Benchmarking Molecular Relational Learning}, author={Sizhe Liu and Jun Xia and Lecheng Zhang and Yuchen Liu and Yue Liu and Wenjie Du and Zhangyang Gao and Bozhen Hu and Cheng Tan and hongxin xiang and Stan Z. Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=raOYixthlY} }
Molecular relational learning (MRL) is crucial for understanding the interaction behaviors between molecular pairs, a critical aspect of drug discovery and development. However, the large feasible model space of MRL poses significant challenges to benchmarking, and existing MRL frameworks face limitations in flexibility and scope. To address these challenges, avoid repetitive coding efforts, and ensure fair comparison of models, we introduce FlexMol, a comprehensive toolkit designed to facilitate the construction and evaluation of diverse model architectures across various datasets and performance metrics. FlexMol offers a robust suite of preset model components, including 16 drug encoders, 13 protein sequence encoders, 9 protein structure encoders, and 7 interaction layers. With its easy-to-use API and flexibility, FlexMol supports the dynamic construction of over 70,000 distinct combinations of model architectures. Additionally, we provide detailed benchmark results and code examples to demonstrate FlexMol's effectiveness in simplifying and standardizing MRL model development and comparison. FlexMol is open-sourced and available at https://github.com/Steven51516/FlexMol.
FlexMol: A Flexible Toolkit for Benchmarking Molecular Relational Learning
[ "Sizhe Liu", "Jun Xia", "Lecheng Zhang", "Yuchen Liu", "Yue Liu", "Wenjie Du", "Zhangyang Gao", "Bozhen Hu", "Cheng Tan", "hongxin xiang", "Stan Z. Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.15010
[ "https://github.com/Steven51516/FlexMol" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rZlLfa81D8
@inproceedings{ wornow2024wonderbread, title={{WONDERBREAD}: A Benchmark for Evaluating Multimodal Foundation Models on Business Process Management Tasks}, author={Michael Wornow and Avanika Narayan and Benjamin Viggiano and Ishan S. Khare and Tathagat Verma and Tibor Thompson and Miguel Angel Fuentes Hernandez and Sudharsan Sundar and Chloe Trujillo and Krrish Chawla and Rongfei Lu and Justin Shen and Divya Nagaraj and Joshua Martinez and Vardhan Kishore Agrawal and Althea Hudson and Nigam Shah and Christopher Re}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=rZlLfa81D8} }
Existing ML benchmarks lack the depth and diversity of annotations needed for evaluating models on business process management (BPM) tasks. BPM is the practice of documenting, measuring, improving, and automating enterprise workflows. However, research has focused almost exclusively on one task -- full end-to-end automation using agents based on multimodal foundation models (FMs) like GPT-4. This focus on automation ignores the reality of how most BPM tools are applied today -- simply documenting the relevant workflow takes 60% of the time of the typical process optimization project. To address this gap we present WONDERBREAD, the first benchmark for evaluating multimodal FMs on BPM tasks beyond automation. Our contributions are: (1) a dataset containing 2928 documented workflow demonstrations; (2) 6 novel BPM tasks sourced from real-world applications ranging from workflow documentation to knowledge transfer to process improvement; and (3) an automated evaluation harness. Our benchmark shows that while state-of-the-art FMs can automatically generate documentation (e.g. recalling 88% of the steps taken in a video demonstration of a workflow), they struggle to re-apply that knowledge towards finer-grained validation of workflow completion (F1 < 0.3). We hope WONDERBREAD encourages the development of more "human-centered" AI tooling for enterprise applications and furthers the exploration of multimodal FMs for the broader universe of BPM tasks. We publish our dataset and experiments here: https://github.com/HazyResearch/wonderbread
WONDERBREAD: A Benchmark for Evaluating Multimodal Foundation Models on Business Process Management Tasks
[ "Michael Wornow", "Avanika Narayan", "Benjamin Viggiano", "Ishan S. Khare", "Tathagat Verma", "Tibor Thompson", "Miguel Angel Fuentes Hernandez", "Sudharsan Sundar", "Chloe Trujillo", "Krrish Chawla", "Rongfei Lu", "Justin Shen", "Divya Nagaraj", "Joshua Martinez", "Vardhan Kishore Agrawal", "Althea Hudson", "Nigam Shah", "Christopher Re" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.13264
[ "https://github.com/hazyresearch/wonderbread" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rIHx6puY5b
@inproceedings{ feuer2024select, title={{SELECT}: A Large-Scale Benchmark of Data Curation Strategies for Image Classification}, author={Benjamin Feuer and Jiawei Xu and Niv Cohen and Patrick Yubeaton and Govind Mittal and Chinmay Hegde}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=rIHx6puY5b} }
Data curation is the problem of how to collect and organize samples into a dataset that supports efficient learning. Despite the centrality of the task, little work has been devoted towards a large-scale, systematic comparison of various curation methods. In this work, we take steps towards a formal evaluation of data curation strategies and introduce SELECT, the first large-scale benchmark of curation strategies for image classification. In order to generate baseline methods for the SELECT benchmark, we create a new dataset, ImageNet++, which constitutes the largest superset of ImageNet-1K to date. Our dataset extends ImageNet with 5 new training-data shifts, each approximately the size of ImageNet-1K, and each assembled using a distinct curation strategy. We evaluate our data curation baselines in two ways: (i) using each training-data shift to train identical image classification models from scratch (ii) using it to inspect a fixed pretrained self-supervised representation. Our findings show interesting trends, particularly pertaining to recent methods for data curation such as synthetic data generation and lookup based on CLIP embeddings. We show that although these strategies are highly competitive for certain tasks, the curation strategy used to assemble the original ImageNet-1K dataset remains the gold standard. We anticipate that our benchmark can illuminate the path for new methods to further reduce the gap. We release our checkpoints, code, documentation, and a link to our dataset at https://github.com/jimmyxu123/SELECT.
SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Classification
[ "Benjamin Feuer", "Jiawei Xu", "Niv Cohen", "Patrick Yubeaton", "Govind Mittal", "Chinmay Hegde" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.05057
[ "https://github.com/jimmyxu123/select" ]
https://huggingface.co/papers/2410.05057
1
7
2
6
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=rI7kbFTSpr
@inproceedings{ hu2024towards, title={Towards Reliable Model Selection for Unsupervised Domain Adaptation: An Empirical Study and A Certified Baseline}, author={Dapeng Hu and Mi Luo and Jian Liang and Chuan-Sheng Foo}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=rI7kbFTSpr} }
Selecting appropriate hyperparameters is crucial for unlocking the full potential of advanced unsupervised domain adaptation (UDA) methods in unlabeled target domains. Although this challenge remains under-explored, it has recently garnered increasing attention with the proposals of various model selection methods. Reliable model selection should maintain performance across diverse UDA methods and scenarios, especially avoiding highly risky worst-case selections—selecting the model or hyperparameter with the worst performance in the pool. \textit{Are existing model selection methods reliable and versatile enough for different UDA tasks?} In this paper, we provide a comprehensive empirical study involving 8 existing model selection approaches to answer this question. Our evaluation spans 12 UDA methods across 5 diverse UDA benchmarks and 5 popular UDA scenarios. Surprisingly, we find that none of these approaches can effectively avoid the worst-case selection. In contrast, a simple but overlooked ensemble-based selection approach, which we call EnsV, is both theoretically and empirically certified to avoid the worst-case selection, ensuring high reliability. Additionally, EnsV is versatile for various practical but challenging UDA scenarios, including validation of open-partial-set UDA and source-free UDA. Finally, we call for more attention to the reliability of model selection in UDA: avoiding the worst-case is as significant as achieving peak selection performance and should not be overlooked when developing new model selection methods. Code is available at https://github.com/LHXXHB/EnsV.
Towards Reliable Model Selection for Unsupervised Domain Adaptation: An Empirical Study and A Certified Baseline
[ "Dapeng Hu", "Mi Luo", "Jian Liang", "Chuan-Sheng Foo" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=rGdy9jrBs8
@inproceedings{ ye2024uavd, title={{UAV}3D: A Large-scale 3D Perception Benchmark for Unmanned Aerial Vehicles}, author={Hui Ye and Rajshekhar Sunderraman and Shihao Ji}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=rGdy9jrBs8} }
Unmanned Aerial Vehicles (UAVs), equipped with cameras, are employed in numerous applications, including aerial photography, surveillance, and agriculture. In these applications, robust object detection and tracking are essential for the effective deployment of UAVs. However, existing benchmarks for UAV applications are mainly designed for traditional 2D perception tasks, restricting the development of real-world applications that require a 3D understanding of the environment. Furthermore, despite recent advancements in single-UAV perception, limited views of a single UAV platform significantly constrain its perception capabilities over long distances or in occluded areas. To address these challenges, we introduce UAV3D – a benchmark designed to advance research in both 3D and collaborative 3D perception tasks with UAVs. UAV3D comprises 1,000 scenes, each of which has 20 frames with fully annotated 3D bounding boxes on vehicles. We provide the benchmark for four 3D perception tasks: single-UAV 3D object detection, single-UAV object tracking, collaborative-UAV 3D object detection, and collaborative-UAV object tracking. Our dataset and code are available at https://huiyegit.github.io/UAV3D_Benchmark/.
UAV3D: A Large-scale 3D Perception Benchmark for Unmanned Aerial Vehicles
[ "Hui Ye", "Rajshekhar Sunderraman", "Shihao Ji" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.11125
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qwWu95yoZO
@inproceedings{ panchal2024live, title={Live Fitness Coaching as a Testbed for Situated Interaction}, author={Sunny Panchal and Apratim Bhattacharyya and Guillaume Berger and Antoine Mercier and Cornelius B{\"o}hm and Florian Dietrichkeit and Reza Pourreza and Xuanlin Li and Pulkit Madan and Mingu Lee and Mark Todorovich and Ingo Bax and Roland Memisevic}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=qwWu95yoZO} }
Vision-language models have shown impressive progress in recent years. However, existing models are largely limited to turn-based interactions, where each turn must be stepped (i.e., prompted) by the user. Open-ended, asynchronous interactions, where an AI model may proactively deliver timely responses or feedback based on the unfolding situation in real-time, are an open challenge. In this work, we present the QEVD benchmark and dataset, which explores human-AI interaction in the challenging, yet controlled, real-world domain of fitness coaching – a task which intrinsically requires monitoring live user activity and providing immediate feedback. The benchmark requires vision-language models to recognize complex human actions, identify possible mistakes, and provide appropriate feedback in real-time. Our experiments reveal the limitations of existing state-of-the-art vision-language models for such asynchronous situated interactions. Motivated by this, we propose a simple end-to-end streaming baseline that can respond asynchronously to human actions with appropriate feedback at the appropriate time.
Live Fitness Coaching as a Testbed for Situated Interaction
[ "Sunny Panchal", "Apratim Bhattacharyya", "Guillaume Berger", "Antoine Mercier", "Cornelius Böhm", "Florian Dietrichkeit", "Reza Pourreza", "Xuanlin Li", "Pulkit Madan", "Mingu Lee", "Mark Todorovich", "Ingo Bax", "Roland Memisevic" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.08101
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qrZxL3Bto9
@inproceedings{ cruz2024evaluating, title={Evaluating language models as risk scores}, author={Andr{\'e} F Cruz and Moritz Hardt and Celestine Mendler-D{\"u}nner}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=qrZxL3Bto9} }
Current question-answering benchmarks predominantly focus on accuracy in realizable prediction tasks. Conditioned on a question and answer-key, does the most likely token match the ground truth? Such benchmarks necessarily fail to evaluate LLMs' ability to quantify ground-truth outcome uncertainty. In this work, we focus on the use of LLMs as risk scores for unrealizable prediction tasks. We introduce folktexts, a software package to systematically generate risk scores using LLMs, and evaluate them against US Census data products. A flexible API enables the use of different prompting schemes, local or web-hosted models, and diverse census columns that can be used to compose custom prediction tasks. We evaluate 17 recent LLMs across five proposed benchmark tasks. We find that zero-shot risk scores produced by multiple-choice question-answering have high predictive signal but are widely miscalibrated. Base models consistently overestimate outcome uncertainty, while instruction-tuned models underestimate uncertainty and produce over-confident risk scores. In fact, instruction-tuning polarizes answer distribution regardless of true underlying data uncertainty. This reveals a general inability of instruction-tuned models to express data uncertainty using multiple-choice answers. A separate experiment using verbalized chat-style risk queries yields substantially improved calibration across instruction-tuned models. These differences in ability to quantify data uncertainty cannot be revealed in realizable settings, and highlight a blind-spot in the current evaluation ecosystem that folktexts covers.
Evaluating language models as risk scores
[ "André F Cruz", "Moritz Hardt", "Celestine Mendler-Dünner" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.14614
[ "https://github.com/socialfoundations/folktexts" ]
https://huggingface.co/papers/2407.14614
1
0
0
3
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=qqU8WPw44f
@inproceedings{ chen2024curerec, title={{CURE}4Rec: A Benchmark for Recommendation Unlearning with Deeper Influence}, author={Chaochao Chen and Jiaming Zhang and Yizhao Zhang and Li Zhang and Lingjuan Lyu and Yuyuan Li and Biao Gong and Chenggang Yan}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=qqU8WPw44f} }
With increasing privacy concerns in artificial intelligence, regulations have mandated the right to be forgotten, granting individuals the right to withdraw their data from models. Machine unlearning has emerged as a potential solution to enable selective forgetting in models, particularly in recommender systems where historical data contains sensitive user information. Despite recent advances in recommendation unlearning, evaluating unlearning methods comprehensively remains challenging due to the absence of a unified evaluation framework and overlooked aspects of deeper influence, e.g., fairness. To address these gaps, we propose CURE4Rec, the first comprehensive benchmark for recommendation unlearning evaluation. CURE4Rec covers four aspects, i.e., unlearning Completeness, recommendation Utility, unleaRning efficiency, and recommendation fairnEss, under three data selection strategies, i.e., core data, edge data, and random data. Specifically, we consider the deeper influence of unlearning on recommendation fairness and robustness towards data with varying impact levels. We construct multiple datasets with CURE4Rec evaluation and conduct extensive experiments on existing recommendation unlearning methods. Our code is released at https://github.com/xiye7lai/CURE4Rec.
CURE4Rec: A Benchmark for Recommendation Unlearning with Deeper Influence
[ "Chaochao Chen", "Jiaming Zhang", "Yizhao Zhang", "Li Zhang", "Lingjuan Lyu", "Yuyuan Li", "Biao Gong", "Chenggang Yan" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2408.14393
[ "https://github.com/xiye7lai/cure4rec" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qmvtDIfbmS
@inproceedings{ xie2024whodunitbench, title={WhodunitBench: Evaluating Large Multimodal Agents via Murder Mystery Games}, author={Junlin Xie and Ruifei Zhang and Zhihong Chen and Xiang Wan and Guanbin Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=qmvtDIfbmS} }
Recently, large language models (LLMs) have achieved superior performance, empowering the development of large multimodal agents (LMAs). An LMA is anticipated to execute practical tasks that require various capabilities, including multimodal perception, interaction, reasoning, and decision making. However, existing benchmarks are limited in assessing compositional skills and actions demanded by practical scenarios, as they primarily focus on single tasks and static scenarios. To bridge this gap, we introduce WhodunitBench, a benchmark rooted in murder mystery games, where players are required to utilize the aforementioned skills to achieve their objective (i.e., identifying the `murderer' or hiding themselves), providing a simulated dynamic environment for evaluating LMAs. Specifically, WhodunitBench includes two evaluation modes. The first mode, the arena-style evaluation, is constructed from 50 meticulously curated scripts featuring clear reasoning clues and distinct murderers; the second mode, the chain of evaluation, consists of over 3000 curated multiple-choice questions and open-ended questions, aiming to assess every facet of the murder mystery games for LMAs. Experiments show that although current LMAs show acceptable performance in basic perceptual tasks, they are insufficiently equipped for complex multi-agent collaboration and multi-step reasoning tasks. Furthermore, the full application of the theory of mind to complete games in a manner akin to human behavior remains a significant challenge. We hope this work can illuminate the path forward, providing a solid foundation for the future development of LMAs. Our WhodunitBench is open-source and accessible at: https://github.com/jun0wanan/WhodunitBench-Murder_Mystery_Games
WhodunitBench: Evaluating Large Multimodal Agents via Murder Mystery Games
[ "Junlin Xie", "Ruifei Zhang", "Zhihong Chen", "Xiang Wan", "Guanbin Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qgzdGyQcDt
@inproceedings{ singh2024eevr, title={{EEVR}: A Dataset of Paired Physiological Signals and Textual Descriptions for Joint Emotion Representation Learning}, author={Pragya Singh and Ritvik Budhiraja and Ankush Gupta and Anshul Goswami and Mohan Kumar and Pushpendra Singh}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=qgzdGyQcDt} }
EEVR (Emotion Elicitation in Virtual Reality) is a novel dataset specifically designed for language supervision-based pre-training of emotion recognition tasks, such as valence and arousal classification. It features high-quality physiological signals, including electrodermal activity (EDA) and photoplethysmography (PPG), acquired through emotion elicitation via 360-degree virtual reality (VR) videos. Additionally, it includes subject-wise textual descriptions of emotions experienced during each stimulus gathered from qualitative interviews. The dataset consists of recordings from 37 participants and is the first dataset to pair raw text with physiological signals, providing additional contextual information that objective labels cannot offer. To leverage this dataset, we introduced the Contrastive Language Signal Pre-training (CLSP) method, which jointly learns representations using pairs of physiological signals and textual descriptions. Our results show that integrating self-reported textual descriptions with physiological signals significantly improves performance on emotion recognition tasks, such as arousal and valence classification. Moreover, our pre-trained CLSP model demonstrates strong zero-shot transferability to existing datasets, outperforming supervised baseline models, suggesting that the representations learned by our method are more contextualized and generalized. The dataset also includes baseline models for arousal, valence, and emotion classification, as well as code for data cleaning and feature extraction. Further details and access to the dataset are available at https://melangelabiiitd.github.io/EEVR/.
EEVR: A Dataset of Paired Physiological Signals and Textual Descriptions for Joint Emotion Representation Learning
[ "Pragya Singh", "Ritvik Budhiraja", "Ankush Gupta", "Anshul Goswami", "Mohan Kumar", "Pushpendra Singh" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qeLh17biCr
@inproceedings{ zhang2024task, title={Task Me Anything}, author={Jieyu Zhang and Weikai Huang and Zixian Ma and Oscar Michel and Dong He and Tanmay Gupta and Wei-Chiu Ma and Ali Farhadi and Aniruddha Kembhavi and Ranjay Krishna}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=qeLh17biCr} }
Benchmarks for large multimodal language models (MLMs) now serve to simultaneously assess the general capabilities of models instead of evaluating for a specific capability. As a result, when a developer wants to identify which models to use for their application, they are overwhelmed by the number of benchmarks and remain uncertain about which benchmark's results are most reflective of their specific use case. This paper introduces Task-Me-Anything, a benchmark generation engine which produces a benchmark tailored to a user's needs. Task-Me-Anything maintains an extendable taxonomy of visual assets and can programmatically generate a vast number of task instances. Additionally, it algorithmically addresses user queries regarding MLM performance efficiently within a computational budget. It contains 113K images, 10K videos, 2K 3D object assets, over 365 object categories, 655 attributes, and 335 relationships. It can generate 500M image/video question-answering pairs, which focus on evaluating MLM perceptual capabilities. Task-Me-Anything reveals critical insights: open-source MLMs excel in object and attribute recognition but lack spatial and temporal understanding; each model exhibits unique strengths and weaknesses; larger models generally perform better, though exceptions exist; and GPT4O demonstrates challenges in recognizing rotating/moving objects and distinguishing colors.
Task Me Anything
[ "Jieyu Zhang", "Weikai Huang", "Zixian Ma", "Oscar Michel", "Dong He", "Tanmay Gupta", "Wei-Chiu Ma", "Ali Farhadi", "Aniruddha Kembhavi", "Ranjay Krishna" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.11775
[ "https://github.com/jieyuz2/taskmeanything" ]
https://huggingface.co/papers/2406.11775
1
8
1
10
[]
[ "weikaih/TaskMeAnything-v1-imageqa-random", "weikaih/TaskMeAnything-v1-videoqa-random", "weikaih/TaskMeAnything-v1-imageqa-2024", "weikaih/TaskMeAnything-v1-videoqa-2024" ]
[ "Demo750/XGBoost_Gaze" ]
[]
[ "weikaih/TaskMeAnything-v1-imageqa-random", "weikaih/TaskMeAnything-v1-videoqa-random", "weikaih/TaskMeAnything-v1-imageqa-2024", "weikaih/TaskMeAnything-v1-videoqa-2024" ]
[ "Demo750/XGBoost_Gaze" ]
1
null
https://openreview.net/forum?id=qXvepIzFL5
@inproceedings{ luo2024mmmrs, title={{MMM}-{RS}: A Multi-modal, Multi-{GSD}, Multi-scene Remote Sensing Dataset and Benchmark for Text-to-Image Generation}, author={Jialin Luo and Yuanzhi Wang and Ziqi Gu and Yide Qiu and Shuaizhen Yao and Fuyun Wang and Chunyan Xu and Wenhua Zhang and Dan Wang and Zhen Cui}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=qXvepIzFL5} }
Recently, the diffusion-based generative paradigm has achieved impressive general image generation capabilities with text prompts due to its accurate distribution modeling and stable training process. However, generating diverse remote sensing (RS) images that are tremendously different from general images in terms of scale and perspective remains a formidable challenge due to the lack of a comprehensive remote sensing image generation dataset with various modalities, ground sample distances (GSD), and scenes. In this paper, we propose a Multi-modal, Multi-GSD, Multi-scene Remote Sensing (MMM-RS) dataset and benchmark for text-to-image generation in diverse remote sensing scenarios. Specifically, we first collect nine publicly available RS datasets and conduct standardization for all samples. To bridge RS images to textual semantic information, we utilize a large-scale pretrained vision-language model to automatically output text prompts and perform hand-crafted rectification, resulting in information-rich text-image pairs (including multi-modal images). In particular, we design some methods to obtain the images with different GSD and various environments (e.g., low-light, foggy) in a single sample. With extensive manual screening and refining annotations, we ultimately obtain a MMM-RS dataset that comprises approximately 2.1 million text-image pairs. Extensive experimental results verify that our proposed MMM-RS dataset allows off-the-shelf diffusion models to generate diverse RS images across various modalities, scenes, weather conditions, and GSD. The dataset is available at https://github.com/ljl5261/MMM-RS.
MMM-RS: A Multi-modal, Multi-GSD, Multi-scene Remote Sensing Dataset and Benchmark for Text-to-Image Generation
[ "Jialin Luo", "Yuanzhi Wang", "Ziqi Gu", "Yide Qiu", "Shuaizhen Yao", "Fuyun Wang", "Chunyan Xu", "Wenhua Zhang", "Dan Wang", "Zhen Cui" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=qWTfCO4HvT
@inproceedings{ varbella2024powergraph, title={PowerGraph: A power grid benchmark dataset for graph neural networks}, author={Anna Varbella and Kenza Amara and Blazhe Gjorgiev and Mennatallah El-Assady and Giovanni Sansavini}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=qWTfCO4HvT} }
Power grids are critical infrastructures of paramount importance to modern society and, therefore, engineered to operate under diverse conditions and failures. The ongoing energy transition poses new challenges for the decision-makers and system operators. Therefore, we must develop grid analysis algorithms to ensure reliable operations. These key tools include power flow analysis and system security analysis, both needed for effective operational and strategic planning. The literature review shows a growing trend of machine learning (ML) models that perform these analyses effectively. In particular, Graph Neural Networks (GNNs) stand out in such applications because of the graph-based structure of power grids. However, there is a lack of publicly available graph datasets for training and benchmarking ML models in electrical power grid applications. First, we present PowerGraph, which comprises GNN-tailored datasets for i) power flows, ii) optimal power flows, and iii) cascading failure analyses of power grids. Second, we provide ground-truth explanations for the cascading failure analysis. Finally, we perform a complete benchmarking of GNN methods for node-level and graph-level tasks and explainability. Overall, PowerGraph is a multifaceted GNN dataset for diverse tasks that includes power flow and fault scenarios with real-world explanations, providing a valuable resource for developing improved GNN models for node-level, graph-level tasks and explainability methods in power system modeling. The dataset is available at https://figshare.com/articles/dataset/PowerGraph/22820534 and the code at https://github.com/PowerGraph-Datasets.
PowerGraph: A power grid benchmark dataset for graph neural networks
[ "Anna Varbella", "Kenza Amara", "Blazhe Gjorgiev", "Mennatallah El-Assady", "Giovanni Sansavini" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2402.02827
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0