| Column | Dtype | Range / values |
|---|---|---|
| bibtex_url | null | — |
| proceedings | string | lengths 42–42 |
| bibtext | string | lengths 302–2.02k |
| abstract | string | lengths 566–2.48k |
| title | string | lengths 16–179 |
| authors | sequence | lengths 1–76 |
| id | string (categorical) | 1 value |
| type | string (categorical) | 2 values |
| arxiv_id | string | lengths 0–10 |
| GitHub | sequence | lengths 1–1 |
| paper_page | string | lengths 0–40 |
| n_linked_authors | int64 | -1 to 24 |
| upvotes | int64 | -1 to 86 |
| num_comments | int64 | -1 to 10 |
| n_authors | int64 | -1 to 75 |
| Models | sequence | lengths 0–37 |
| Datasets | sequence | lengths 0–10 |
| Spaces | sequence | lengths 0–26 |
| old_Models | sequence | lengths 0–37 |
| old_Datasets | sequence | lengths 0–10 |
| old_Spaces | sequence | lengths 0–26 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
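The schema above describes one row per accepted paper. As a minimal sketch of how rows with this schema might be loaded and filtered with the Hugging Face `datasets` library — assuming the collection is published as a dataset repository; the repository id `nips-2024-db-track/papers` below is a hypothetical placeholder, not a real name from this page:

```python
# Minimal sketch (assumptions noted): load rows that follow the schema above
# and filter them. The repository id is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("nips-2024-db-track/papers", split="train")  # hypothetical repo id

# Papers whose Hugging Face paper page existed before the conference and
# that link at least one artifact (model, dataset, or Space).
linked = ds.filter(
    lambda row: row["paper_page_exists_pre_conf"] == 1
    and len(row["Models"]) + len(row["Datasets"]) + len(row["Spaces"]) > 0
)

# Rank by upvotes; a value of -1 marks rows where no paper page was found.
for row in sorted(linked, key=lambda r: r["upvotes"], reverse=True)[:5]:
    print(row["title"], row["arxiv_id"], row["upvotes"])
```

The `type` column takes two values (oral, poster), so the same pattern extends to, for example, `ds.filter(lambda r: r["type"] == "oral")`.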
bibtex_url: null
proceedings: https://openreview.net/forum?id=AAo8zAShX3
@inproceedings{ bushuiev2024massspecgym, title={MassSpecGym: A benchmark for the discovery and identification of molecules}, author={Roman Bushuiev and Anton Bushuiev and Niek F. de Jonge and Adamo Young and Fleming Kretschmer and Raman Samusevich and Janne Heirman and Fei Wang and Luke Zhang and Kai D{\"u}hrkop and Marcus Ludwig and Nils A. Haupt and Apurva Kalia and Corinna Brungs and Robin Schmid and Russell Greiner and BO WANG and David Wishart and Liping Liu and Juho Rousu and Wout Bittremieux and Hannes Rost and Tytus D. Mak and Soha Hassoun and Florian Huber and Justin J.J. van der Hooft and Michael A. Stravs and Sebastian B{\"o}cker and Josef Sivic and Tomas Pluskal}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=AAo8zAShX3} }
The discovery and identification of molecules in biological and environmental samples is crucial for advancing biomedical and chemical sciences. Tandem mass spectrometry (MS/MS) is the leading technique for high-throughput elucidation of molecular structures. However, decoding a molecular structure from its mass spectrum is exceptionally challenging, even when performed by human experts. As a result, the vast majority of acquired MS/MS spectra remain uninterpreted, thereby limiting our understanding of the underlying (bio)chemical processes. Despite decades of progress in machine learning applications for predicting molecular structures from MS/MS spectra, the development of new methods is severely hindered by the lack of standard datasets and evaluation protocols. To address this problem, we propose MassSpecGym -- the first comprehensive benchmark for the discovery and identification of molecules from MS/MS data. Our benchmark comprises the largest publicly available collection of high-quality MS/MS spectra and defines three MS/MS annotation challenges: \textit{de novo} molecular structure generation, molecule retrieval, and spectrum simulation. It includes new evaluation metrics and a generalization-demanding data split, therefore standardizing the MS/MS annotation tasks and rendering the problem accessible to the broad machine learning community. MassSpecGym is publicly available at \url{https://github.com/pluskal-lab/MassSpecGym}.
MassSpecGym: A benchmark for the discovery and identification of molecules
[ "Roman Bushuiev", "Anton Bushuiev", "Niek F. de Jonge", "Adamo Young", "Fleming Kretschmer", "Raman Samusevich", "Janne Heirman", "Fei Wang", "Luke Zhang", "Kai Dührkop", "Marcus Ludwig", "Nils A. Haupt", "Apurva Kalia", "Corinna Brungs", "Robin Schmid", "Russell Greiner", "BO WANG", "David Wishart", "Liping Liu", "Juho Rousu", "Wout Bittremieux", "Hannes Rost", "Tytus D. Mak", "Soha Hassoun", "Florian Huber", "Justin J.J. van der Hooft", "Michael A. Stravs", "Sebastian Böcker", "Josef Sivic", "Tomas Pluskal" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: oral
arxiv_id: 2410.23326
GitHub: [ "https://github.com/pluskal-lab/massspecgym" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=A33u66KmYf
@inproceedings{ gharaee2024bioscanm, title={{BIOSCAN}-5M: A Multimodal Dataset for Insect Biodiversity}, author={Zahra Gharaee and Scott C Lowe and ZeMing Gong and Pablo Andres Millan Arias and Nicholas Pellegrino and Austin Wang and Joakim Bruslund Haurum and Iuliia Zarubiieva and Lila Kari and Dirk Steinke and Graham W. Taylor and Paul W. Fieguth and Angel X Chang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=A33u66KmYf} }
As part of an ongoing worldwide effort to comprehend and monitor insect biodiversity, this paper presents the BIOSCAN-5M Insect dataset to the machine learning community and establish several benchmark tasks. BIOSCAN-5M is a comprehensive dataset containing multi-modal information for over 5 million insect specimens, and it significantly expands existing image-based biological datasets by including taxonomic labels, raw nucleotide barcode sequences, assigned barcode index numbers, geographical, and size information. We propose three benchmark experiments to demonstrate the impact of the multi-modal data types on the classification and clustering accuracy. First, we pretrain a masked language model on the DNA barcode sequences of the BIOSCAN-5M dataset, and demonstrate the impact of using this large reference library on species- and genus-level classification performance. Second, we propose a zero-shot transfer learning task applied to images and DNA barcodes to cluster feature embeddings obtained from self-supervised learning, to investigate whether meaningful clusters can be derived from these representation embeddings. Third, we benchmark multi-modality by performing contrastive learning on DNA barcodes, image data, and taxonomic information. This yields a general shared embedding space enabling taxonomic classification using multiple types of information and modalities. The code repository of the BIOSCAN-5M Insect dataset is available at https://github.com/bioscan-ml/BIOSCAN-5M.
BIOSCAN-5M: A Multimodal Dataset for Insect Biodiversity
[ "Zahra Gharaee", "Scott C Lowe", "ZeMing Gong", "Pablo Andres Millan Arias", "Nicholas Pellegrino", "Austin Wang", "Joakim Bruslund Haurum", "Iuliia Zarubiieva", "Lila Kari", "Dirk Steinke", "Graham W. Taylor", "Paul W. Fieguth", "Angel X Chang" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2406.12723
GitHub: [ "https://github.com/zahrag/BIOSCAN-5M" ]
paper_page: https://huggingface.co/papers/2406.12723
n_linked_authors: 2
upvotes: 0
num_comments: 0
n_authors: 13
Models: []
Datasets: [ "Gharaee/BIOSCAN-5M" ]
Spaces: []
old_Models: []
old_Datasets: [ "Gharaee/BIOSCAN-5M" ]
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=9tVn4f8aJO
@inproceedings{ liang2024hemm, title={{HEMM}: Holistic Evaluation of Multimodal Foundation Models}, author={Paul Pu Liang and Akshay Goindani and Talha Chafekar and Leena Mathur and Haofei Yu and Russ Salakhutdinov and Louis-Philippe Morency}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=9tVn4f8aJO} }
Multimodal foundation models that can holistically process text alongside images, video, audio, and other sensory modalities are increasingly used in a variety of real-world applications. However, it is challenging to characterize and study progress in multimodal foundation models, given the range of possible modeling decisions, tasks, and domains. In this paper, we introduce Holistic Evaluation of Multimodal Models (HEMM) to systematically evaluate the capabilities of multimodal foundation models across a set of 3 dimensions: basic skills, information flow, and real-world use cases. Basic multimodal skills are internal abilities required to solve problems, such as learning interactions across modalities, fine-grained alignment, multi-step reasoning, and the ability to handle external knowledge. Information flow studies how multimodal content changes during a task through querying, translation, editing, and fusion. Use cases span domain-specific challenges introduced in real-world multimedia, affective computing, natural sciences, healthcare, and human-computer interaction applications. Through comprehensive experiments across the 30 tasks in HEMM, we (1) identify key dataset dimensions (e.g., basic skills, information flows, and use cases) that pose challenges to today’s models, and (2) distill performance trends regarding how different modeling dimensions (e.g., scale, pre-training data, multimodal alignment, pre-training, and instruction tuning objectives) influence performance. Our conclusions regarding challenging multimodal interactions, use cases, and tasks requiring reasoning and external knowledge, the benefits of data and model scale, and the impacts of instruction-tuning yield actionable insights for future work in multimodal foundation models.
HEMM: Holistic Evaluation of Multimodal Foundation Models
[ "Paul Pu Liang", "Akshay Goindani", "Talha Chafekar", "Leena Mathur", "Haofei Yu", "Russ Salakhutdinov", "Louis-Philippe Morency" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2407.03418
GitHub: [ "https://github.com/pliang279/hemm" ]
paper_page: https://huggingface.co/papers/2407.03418
n_linked_authors: 4
upvotes: 8
num_comments: 1
n_authors: 7
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=9aXjIBLwKc
@inproceedings{ wang2024zsceval, title={{ZSC}-Eval: An Evaluation Toolkit and Benchmark for Multi-agent Zero-shot Coordination}, author={Xihuai Wang and Shao Zhang and Wenhao Zhang and Wentao Dong and Jingxiao Chen and Ying Wen and Weinan Zhang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=9aXjIBLwKc} }
Zero-shot coordination (ZSC) is a new cooperative multi-agent reinforcement learning (MARL) challenge that aims to train an ego agent to work with diverse, unseen partners during deployment. The significant difference between the deployment-time partners' distribution and the training partners' distribution determined by the training algorithm makes ZSC a unique out-of-distribution (OOD) generalization challenge. The potential distribution gap between evaluation and deployment-time partners leads to inadequate evaluation, which is exacerbated by the lack of appropriate evaluation metrics. In this paper, we present **ZSC-Eval**, the first evaluation toolkit and benchmark for ZSC algorithms. ZSC-Eval consists of: 1) Generation of evaluation partner candidates through behavior-preferring rewards to approximate deployment-time partners' distribution; 2) Selection of evaluation partners by Best-Response Diversity (BR-Div); 3) Measurement of generalization performance with various evaluation partners via the Best-Response Proximity (BR-Prox) metric. We use ZSC-Eval to benchmark ZSC algorithms in Overcooked and Google Research Football environments and get novel empirical findings. We also conduct a human experiment of current ZSC algorithms to verify the ZSC-Eval's consistency with human evaluation. ZSC-Eval is now available at https://github.com/sjtu-marl/ZSC-Eval.
ZSC-Eval: An Evaluation Toolkit and Benchmark for Multi-agent Zero-shot Coordination
[ "Xihuai Wang", "Shao Zhang", "Wenhao Zhang", "Wentao Dong", "Jingxiao Chen", "Ying Wen", "Weinan Zhang" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2310.05208
GitHub: [ "https://github.com/HumanCompatibleAI/overcooked_ai" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=9ZDdlgH6O8
@inproceedings{ zhao2024ultraedit, title={UltraEdit: Instruction-based Fine-Grained Image Editing at Scale}, author={Haozhe Zhao and Xiaojian Ma and Liang Chen and Shuzheng Si and Rujie Wu and Kaikai An and Peiyu Yu and Minjia Zhang and Qing Li and Baobao Chang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=9ZDdlgH6O8} }
This paper presents UltraEdit, a large-scale (~ 4M editing samples), automatically generated dataset for instruction-based image editing. Our key idea is to address the drawbacks in existing image editing datasets like InstructPix2Pix and MagicBrush, and provide a *systematic* approach to producing massive and high-quality image editing samples: 1) UltraEdit includes more diverse editing instructions by combining LLM creativity and in-context editing examples by human raters; 2) UltraEdit is anchored on real images (photographs or artworks), which offers more diversity and less biases than those purely synthesized by text-to-image models; 3) UltraEdit supports region-based editing with high-quality, automatically produced region annotations. Our experiments show that canonical diffusion-based editing baselines trained on UltraEdit set new records on challenging MagicBrush and Emu-Edit benchmarks, respectively. Our analysis further confirms the crucial role of real image anchors and region-based editing data. The dataset, code, and models will be made public.
UltraEdit: Instruction-based Fine-Grained Image Editing at Scale
[ "Haozhe Zhao", "Xiaojian Ma", "Liang Chen", "Shuzheng Si", "Rujie Wu", "Kaikai An", "Peiyu Yu", "Minjia Zhang", "Qing Li", "Baobao Chang" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2407.05282
GitHub: [ "" ]
paper_page: https://huggingface.co/papers/2407.05282
n_linked_authors: 7
upvotes: 12
num_comments: 1
n_authors: 10
Models: []
Datasets: [ "BleachNick/UltraEdit_500k", "BleachNick/UltraEdit", "BleachNick/UltraEdit_Region_Based_100k" ]
Spaces: []
old_Models: []
old_Datasets: [ "BleachNick/UltraEdit_500k", "BleachNick/UltraEdit", "BleachNick/UltraEdit_Region_Based_100k" ]
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=930e8v5ctj
@inproceedings{ kazemi2024remi, title={Re{MI}: A Dataset for Reasoning with Multiple Images}, author={Mehran Kazemi and Nishanth Dikkala and Ankit Anand and Petar Devic and Ishita Dasgupta and Fangyu Liu and Bahare Fatemi and Pranjal Awasthi and Sreenivas Gollapudi and Dee Guo and Ahmed Qureshi}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=930e8v5ctj} }
With the continuous advancement of large language models (LLMs), it is essential to create new benchmarks to evaluate their expanding capabilities and identify areas for improvement. This work focuses on multi-image reasoning, an emerging capability in state-of-the-art LLMs. We introduce ReMI, a dataset designed to assess LLMs' ability to reason with multiple images. This dataset encompasses a diverse range of tasks, spanning various reasoning domains such as math, physics, logic, code, table/chart understanding, and spatial and temporal reasoning. It also covers a broad spectrum of characteristics found in multi-image reasoning scenarios. We have benchmarked several cutting-edge LLMs using ReMI and found a substantial gap between their performance and human-level proficiency. This highlights the challenges in multi-image reasoning and the need for further research. Our analysis also reveals the strengths and weaknesses of different models, shedding light on the types of reasoning that are currently attainable and areas where future models require improvement. We anticipate that ReMI will be a valuable resource for developing and evaluating more sophisticated LLMs capable of handling real-world multi-image understanding tasks.
ReMI: A Dataset for Reasoning with Multiple Images
[ "Mehran Kazemi", "Nishanth Dikkala", "Ankit Anand", "Petar Devic", "Ishita Dasgupta", "Fangyu Liu", "Bahare Fatemi", "Pranjal Awasthi", "Sreenivas Gollapudi", "Dee Guo", "Ahmed Qureshi" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2406.09175
GitHub: [ "" ]
paper_page: https://huggingface.co/papers/2406.09175
n_linked_authors: 0
upvotes: 0
num_comments: 0
n_authors: 11
Models: []
Datasets: [ "mehrankazemi/ReMI" ]
Spaces: []
old_Models: []
old_Datasets: [ "mehrankazemi/ReMI" ]
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=8m6zw8Jur0
@inproceedings{ roberts2024imagestruct, title={Image2Struct: Benchmarking Structure Extraction for Vision-Language Models}, author={Josselin Somerville Roberts and Tony Lee and Chi Heem Wong and Michihiro Yasunaga and Yifan Mai and Percy Liang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=8m6zw8Jur0} }
We introduce Image2Struct, a benchmark to evaluate vision-language models (VLMs) on extracting structure from images. Our benchmark 1) captures real-world use cases, 2) is fully automatic and does not require human judgment, and 3) is based on a renewable stream of fresh data. In Image2Struct, VLMs are prompted to generate the underlying structure (e.g., LaTeX code or HTML) from an input image (e.g., webpage screenshot). The structure is then rendered to produce an output image (e.g., rendered webpage), which is compared against the input image to produce a similarity score. This round-trip evaluation allows us to quantitatively evaluate VLMs on tasks with multiple valid structures. We create a pipeline that downloads fresh data from active online communities upon execution and evaluates the VLMs without human intervention. We introduce three domains (Webpages, LaTeX, and Musical Scores) and use five image metrics (pixel similarity, cosine similarity between the Inception vectors, learned perceptual image patch similarity, structural similarity index measure, and earth mover similarity) that allow efficient and automatic comparison between pairs of images. We evaluate Image2Struct on 14 prominent VLMs and find that scores vary widely, indicating that Image2Struct can differentiate between the performances of different VLMs. Additionally, the best score varies considerably across domains (e.g., 0.402 on sheet music vs. 0.830 on LaTeX equations), indicating that Image2Struct contains tasks of varying difficulty. For transparency, we release the full results at https://crfm.stanford.edu/helm/image2struct/v1.0.1/.
Image2Struct: Benchmarking Structure Extraction for Vision-Language Models
[ "Josselin Somerville Roberts", "Tony Lee", "Chi Heem Wong", "Michihiro Yasunaga", "Yifan Mai", "Percy Liang" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2410.22456
GitHub: [ "" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=8kFctyli9H
@inproceedings{ wei2024proving, title={Proving Olympiad Algebraic Inequalities without Human Demonstrations}, author={Chenrui Wei and Mengzhou Sun and Wei Wang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=8kFctyli9H} }
Solving Olympiad-level mathematical problems represents a significant advancement in machine intelligence and automated reasoning. Current machine learning methods, however, struggle to solve Olympiad-level problems beyond Euclidean plane geometry due to a lack of large-scale, high-quality datasets. The challenge is even greater in algebraic systems, which involve infinite reasoning spaces within finite conditions. To address these issues, we propose *AIPS*, an *Algebraic Inequality Proving System* capable of autonomously generating complex inequality theorems and effectively solving Olympiad-level inequality problems without requiring human demonstrations. During proof search in a mixed reasoning manner, a value curriculum learning strategy on generated datasets is implemented to improve proving performance, demonstrating strong mathematical intuitions. On a test set of 20 International Mathematical Olympiad-level inequality problems, AIPS successfully solved 10, outperforming state-of-the-art methods. Furthermore, AIPS automatically generated a vast array of non-trivial theorems without human intervention, some of which have been evaluated by professional contestants and deemed to reach the level of the International Mathematical Olympiad. Notably, one theorem was selected as a competition problem in a major city's 2024 Mathematical Olympiad. All the materials are available at [sites.google.com/view/aips2](https://sites.google.com/view/aips2)
Proving Olympiad Algebraic Inequalities without Human Demonstrations
[ "Chenrui Wei", "Mengzhou Sun", "Wei Wang" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2406.14219
GitHub: [ "" ]
paper_page: https://huggingface.co/papers/2406.14219
n_linked_authors: 1
upvotes: 0
num_comments: 0
n_authors: 3
Models: []
Datasets: [ "llllvvuu/AIPS_inequalities" ]
Spaces: []
old_Models: []
old_Datasets: [ "llllvvuu/AIPS_inequalities" ]
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=8hUUy3hoS8
@inproceedings{ wu2024streambench, title={StreamBench: Towards Benchmarking Continuous Improvement of Language Agents}, author={Cheng-Kuang Wu and Zhi Rui Tam and Chieh-Yen Lin and Yun-Nung Chen and Hung-yi Lee}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=8hUUy3hoS8} }
Recent works have shown that large language model (LLM) agents are able to improve themselves from experience, which is an important ability for continuous enhancement post-deployment. However, existing benchmarks primarily evaluate their innate capabilities and do not assess their ability to improve over time. To address this gap, we introduce StreamBench, a pioneering benchmark designed to evaluate the continuous improvement of LLM agents over an input-feedback sequence. StreamBench simulates an online learning environment where LLMs receive a continuous flow of feedback stream and iteratively enhance their performance. In addition, we propose several simple yet effective baselines for improving LLMs on StreamBench, and provide a comprehensive analysis to identify critical components that contribute to successful streaming strategies. Our work serves as a stepping stone towards developing effective online learning strategies for LLMs, paving the way for more adaptive AI systems in streaming scenarios.
StreamBench: Towards Benchmarking Continuous Improvement of Language Agents
[ "Cheng-Kuang Wu", "Zhi Rui Tam", "Chieh-Yen Lin", "Yun-Nung Chen", "Hung-yi Lee" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2406.08747
GitHub: [ "https://github.com/stream-bench/stream-bench" ]
paper_page: https://huggingface.co/papers/2406.08747
n_linked_authors: 0
upvotes: 1
num_comments: 0
n_authors: 5
Models: []
Datasets: [ "appier-ai-research/StreamBench" ]
Spaces: []
old_Models: []
old_Datasets: [ "appier-ai-research/StreamBench" ]
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=8RaxRs5VDf
@inproceedings{ li2024lexeval, title={LexEval: A Comprehensive Chinese Legal Benchmark for Evaluating Large Language Models}, author={Haitao Li and You Chen and Qingyao Ai and Yueyue WU and Ruizhe Zhang and Yiqun LIU}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=8RaxRs5VDf} }
Large language models (LLMs) have made significant progress in natural language processing tasks and demonstrate considerable potential in the legal domain. However, legal applications demand high standards of accuracy, reliability, and fairness. Applying existing LLMs to legal systems without careful evaluation of their potential and limitations could pose significant risks in legal practice. To this end, we introduce a standardized comprehensive Chinese legal benchmark LexEval. This benchmark is notable in the following three aspects: (1) Ability Modeling: We propose a new taxonomy of legal cognitive abilities to organize different tasks. (2) Scale: To our knowledge, LexEval is currently the largest Chinese legal evaluation dataset, comprising 23 tasks and 14,150 questions. (3) Data: we utilize formatted existing datasets, exam datasets and newly annotated datasets by legal experts to comprehensively evaluate the various capabilities of LLMs. LexEval not only focuses on the ability of LLMs to apply fundamental legal knowledge but also dedicates efforts to examining the ethical issues involved in their application. We evaluated 38 open-source and commercial LLMs and obtained some interesting findings. The experiments and findings offer valuable insights into the challenges and potential solutions for developing Chinese legal systems and LLM evaluation pipelines. The LexEval dataset and leaderboard are publicly available at https://github.com/CSHaitao/LexEval and will be continuously updated.
LexEval: A Comprehensive Chinese Legal Benchmark for Evaluating Large Language Models
[ "Haitao Li", "You Chen", "Qingyao Ai", "Yueyue WU", "Ruizhe Zhang", "Yiqun LIU" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2409.20288
GitHub: [ "https://github.com/cshaitao/lexeval" ]
paper_page: https://huggingface.co/papers/2409.20288
n_linked_authors: 0
upvotes: 0
num_comments: 0
n_authors: 6
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=8J8w43S9kr
@inproceedings{ dumpala2024sugarcrepe, title={{SUGARCREPE}++ Dataset: Vision-Language Model Sensitivity to Semantic and Lexical Alterations}, author={Sri Harsha Dumpala and Aman Jaiswal and Chandramouli Shama Sastry and Evangelos Milios and Sageev Oore and Hassan Sajjad}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=8J8w43S9kr} }
Despite their remarkable successes, state-of-the-art large language models (LLMs), including vision-and-language models (VLMs) and unimodal language models (ULMs), fail to understand precise semantics. For example, semantically equivalent sentences expressed using different lexical compositions elicit diverging representations. The degree of this divergence and its impact on encoded semantics is not very well understood. In this paper, we introduce the SUGARCREPE++ dataset to analyze the sensitivity of VLMs and ULMs to lexical and semantic alterations. Each sample in SUGARCREPE++ dataset consists of an image and a corresponding triplet of captions: a pair of semantically equivalent but lexically different positive captions and one hard negative caption. This poses a 3-way semantic (in)equivalence problem to the language models. We comprehensively evaluate VLMs and ULMs that differ in architecture, pre-training objectives and datasets to benchmark the performance of SUGARCREPE++ dataset. Experimental results highlight the difficulties of VLMs in distinguishing between lexical and semantic variations, particularly to object attributes and spatial relations. Although VLMs with larger pre-training datasets, model sizes, and multiple pre-training objectives achieve better performance on SUGARCREPE++, there is a significant opportunity for improvement. We demonstrate that models excelling on compositionality datasets may not perform equally well on SUGARCREPE++. This indicates that compositionality alone might not be sufficient to fully understand semantic and lexical alterations. Given the importance of the property that the SUGARCREPE++ dataset targets, it serves as a new challenge to the vision-and-language community. Data and code is available at https://github.com/Sri-Harsha/scpp.
SUGARCREPE++ Dataset: Vision-Language Model Sensitivity to Semantic and Lexical Alterations
[ "Sri Harsha Dumpala", "Aman Jaiswal", "Chandramouli Shama Sastry", "Evangelos Milios", "Sageev Oore", "Hassan Sajjad" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2406.11171
GitHub: [ "https://github.com/Sri-Harsha/scpp" ]
paper_page: https://huggingface.co/papers/2406.11171
n_linked_authors: 0
upvotes: 0
num_comments: 0
n_authors: 6
Models: []
Datasets: [ "Aman-J/SugarCrepe_pp" ]
Spaces: []
old_Models: []
old_Datasets: [ "Aman-J/SugarCrepe_pp" ]
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=7ey2ugXs36
@inproceedings{ dong2024cleandiffuser, title={CleanDiffuser: An Easy-to-use Modularized Library for Diffusion Models in Decision Making}, author={Zibin Dong and Yifu Yuan and Jianye HAO and Fei Ni and Yi Ma and Pengyi Li and YAN ZHENG}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=7ey2ugXs36} }
Leveraging the powerful generative capability of diffusion models (DMs) to build decision-making agents has achieved extensive success. However, there is still a demand for an easy-to-use and modularized open-source library that offers customized and efficient development for DM-based decision-making algorithms. In this work, we introduce **CleanDiffuser**, the first DM library specifically designed for decision-making algorithms. By revisiting the roles of DMs in the decision-making domain, we identify a set of essential sub-modules that constitute the core of CleanDiffuser, allowing for the implementation of various DM algorithms with simple and flexible building blocks. To demonstrate the reliability and flexibility of CleanDiffuser, we conduct comprehensive evaluations of various DM algorithms implemented with CleanDiffuser across an extensive range of tasks. The analytical experiments provide a wealth of valuable design choices and insights, reveal opportunities and challenges, and lay a solid groundwork for future research. CleanDiffuser will provide long-term support to the decision-making community, enhancing reproducibility and fostering the development of more robust solutions.
CleanDiffuser: An Easy-to-use Modularized Library for Diffusion Models in Decision Making
[ "Zibin Dong", "Yifu Yuan", "Jianye HAO", "Fei Ni", "Yi Ma", "Pengyi Li", "YAN ZHENG" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2406.09509
GitHub: [ "https://github.com/cleandiffuserteam/cleandiffuser" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=7TCK0aBL1C
@inproceedings{ kon2024iaceval, title={IaC-Eval: A Code Generation Benchmark for Cloud Infrastructure-as-Code Programs}, author={Patrick Tser Jern Kon and Jiachen Liu and Yiming Qiu and Weijun Fan and Ting He and Lei Lin and Haoran Zhang and Owen M. Park and George Sajan Elengikal and Yuxin Kang and Ang Chen and Mosharaf Chowdhury and Myungjin Lee and Xinyu Wang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=7TCK0aBL1C} }
Infrastructure-as-Code (IaC), an important component of cloud computing, allows the definition of cloud infrastructure in high-level programs. However, developing IaC programs is challenging, complicated by factors that include the burgeoning complexity of the cloud ecosystem (e.g., diversity of cloud services and workloads), and the relative scarcity of IaC-specific code examples and public repositories. While large language models (LLMs) have shown promise in general code generation and could potentially aid in IaC development, no benchmarks currently exist for evaluating their ability to generate IaC code. We present IaC-Eval, a first step in this research direction. IaC-Eval's dataset includes 458 human-curated scenarios covering a wide range of popular AWS services, at varying difficulty levels. Each scenario mainly comprises a natural language IaC problem description and an infrastructure intent specification. The former is fed as user input to the LLM, while the latter is a general notion used to verify if the generated IaC program conforms to the user's intent; by making explicit the problem's requirements that can encompass various cloud services, resources and internal infrastructure details. Our in-depth evaluation shows that contemporary LLMs perform poorly on IaC-Eval, with the top-performing model, GPT-4, obtaining a pass@1 accuracy of 19.36%. In contrast, it scores 86.6% on EvalPlus, a popular Python code generation benchmark, highlighting a need for advancements in this domain. We open-source the IaC-Eval dataset and evaluation framework at https://github.com/autoiac-project/iac-eval to enable future research on LLM-based IaC code generation.
IaC-Eval: A Code Generation Benchmark for Cloud Infrastructure-as-Code Programs
[ "Patrick Tser Jern Kon", "Jiachen Liu", "Yiming Qiu", "Weijun Fan", "Ting He", "Lei Lin", "Haoran Zhang", "Owen M. Park", "George Sajan Elengikal", "Yuxin Kang", "Ang Chen", "Mosharaf Chowdhury", "Myungjin Lee", "Xinyu Wang" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
GitHub: [ "" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=70iM5TBkN5
@inproceedings{ wei2024a, title={A Large-Scale Human-Centric Benchmark for Referring Expression Comprehension in the {LMM} Era}, author={Fangyun Wei and Jinjing Zhao and Kun Yan and Hongyang Zhang and Chang Xu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=70iM5TBkN5} }
Prior research in human-centric AI has primarily addressed single-modality tasks like pedestrian detection, action recognition, and pose estimation. However, the emergence of large multimodal models (LMMs) such as GPT-4V has redirected attention towards integrating language with visual content. Referring expression comprehension (REC) represents a prime example of this multimodal approach. Current human-centric REC benchmarks, typically sourced from general datasets, fall short in the LMM era due to their limitations, such as insufficient testing samples, overly concise referring expressions, and limited vocabulary, making them inadequate for evaluating the full capabilities of modern REC models. In response, we present HC-RefLoCo (Human-Centric Referring Expression Comprehension with Long Context), a benchmark that includes 13,452 images, 24,129 instances, and 44,738 detailed annotations, encompassing a vocabulary of 18,681 words. Each annotation, meticulously reviewed for accuracy, averages 93.2 words and includes topics such as appearance, human-object interaction, location, action, celebrity, and OCR. HC-RefLoCo provides a wider range of instance scales and diverse evaluation protocols, encompassing accuracy with various IoU criteria, scale-aware evaluation, and subject-specific assessments. Our experiments, which assess 24 models, highlight HC-RefLoCo’s potential to advance human-centric AI by challenging contemporary REC models with comprehensive and varied data. Our benchmark, along with the evaluation code, are available at https://github.com/ZhaoJingjing713/HC-RefLoCo.
A Large-Scale Human-Centric Benchmark for Referring Expression Comprehension in the LMM Era
[ "Fangyun Wei", "Jinjing Zhao", "Kun Yan", "Hongyang Zhang", "Chang Xu" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
GitHub: [ "" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=6vFy6H4mTI
@inproceedings{ wang2024urbandatalayer, title={UrbanDataLayer: A Unified Data Pipeline for Urban Science}, author={Yiheng Wang and Tianyu Wang and YuYing Zhang and Hongji Zhang and Haoyu Zheng and Guanjie Zheng and Linghe Kong}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=6vFy6H4mTI} }
The rapid progression of urbanization has generated a diverse array of urban data, facilitating significant advancements in urban science and urban computing. Current studies often work on separate problems case by case using diverse data, e.g., air quality prediction, and built-up areas classification. This fragmented approach hinders the urban research field from advancing at the pace observed in Computer Vision and Natural Language Processing, due to two primary reasons. On the one hand, the diverse data processing steps lead to the lack of large-scale benchmarks and therefore decelerate iterative methodology improvement on a single problem. On the other hand, the disparity in multi-modal data formats hinders the combination of the related modal data to stimulate more research findings. To address these challenges, we propose UrbanDataLayer (UDL), a suite of standardized data structures and pipelines for city data engineering, providing a unified data format for researchers. This allows researchers to easily build up large-scale benchmarks and combine multi-modal data, thus expediting the development of multi-modal urban foundation models. To verify the effectiveness of our work, we present four distinct urban problem tasks utilizing the proposed data layer. UrbanDataLayer aims to enhance standardization and operational efficiency within the urban science research community. The examples and source code are available at https://github.com/SJTU-CILAB/udl.
UrbanDataLayer: A Unified Data Pipeline for Urban Science
[ "Yiheng Wang", "Tianyu Wang", "YuYing Zhang", "Hongji Zhang", "Haoyu Zheng", "Guanjie Zheng", "Linghe Kong" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
GitHub: [ "" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=6kc6Hdyknx
@inproceedings{ salehi2024actionatlas, title={ActionAtlas: A Video{QA} Benchmark for Domain-specialized Action Recognition}, author={Mohammadreza Salehi and Jae Sung Park and Aditya Kusupati and Ranjay Krishna and Yejin Choi and Hannaneh Hajishirzi and Ali Farhadi}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=6kc6Hdyknx} }
Our world is full of varied actions and moves in specialized fields that we, as humans, seek to identify and learn about. To evaluate the effectiveness of multi-modal models in helping us recognize such fine-grained actions, we introduce ActionAtlas, a video question answering (VideoQA) benchmark on fine-grained action recognition with short videos across various sports. ActionAtlas contains 554 videos spanning 284 actions across 42 sports with 1161 actions as total potential choices. Unlike most existing action recognition benchmarks that focus on simplistic actions, often identifiable from a single frame, ActionAtlas focuses on intricate movements and tests the models' ability to discern subtle differences. Additionally, each video in ActionAtlas also includes a question, which helps to more accurately pinpoint the action's performer in scenarios where multiple individuals are involved in different activities. We evaluate proprietary and open models on this benchmark and show that the state-of-the-art models only perform at most 48.73% accurately where random chance is 20%. Furthermore, our results show that a high frame sampling rate is essential for recognizing actions in ActionAtlas, a feature that current top proprietary models like Gemini lack in their default settings.
ActionAtlas: A VideoQA Benchmark for Domain-specialized Action Recognition
[ "Mohammadreza Salehi", "Jae Sung Park", "Aditya Kusupati", "Ranjay Krishna", "Yejin Choi", "Hannaneh Hajishirzi", "Ali Farhadi" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2410.05774
GitHub: [ "" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=6cCFK69vJI
@inproceedings{ prabowo2024building, title={Building Timeseries Dataset: Empowering Large-Scale Building Analytics}, author={Arian Prabowo and Xiachong LIN and Imran Razzak and Hao Xue and Emily W. Yap and Matt Amos and Flora D. Salim}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=6cCFK69vJI} }
Buildings play a crucial role in human well-being, influencing occupant comfort, health, and safety. Additionally, they contribute significantly to global energy consumption, accounting for one-third of total energy usage, and carbon emissions. Optimizing building performance presents a vital opportunity to combat climate change and promote human flourishing. However, research in building analytics has been hampered by the lack of accessible, available, and comprehensive real-world datasets on multiple building operations. In this paper, we introduce the Building TimeSeries (BTS) dataset. Our dataset covers three buildings over a three-year period, comprising more than ten thousand timeseries data points with hundreds of unique ontologies. Moreover, the metadata is standardized using the Brick schema. To demonstrate the utility of this dataset, we performed benchmarks on two tasks: timeseries ontology classification and zero-shot forecasting. These tasks represent an essential initial step in addressing challenges related to interoperability in building analytics. Access to the dataset and the code used for benchmarking are available here: https://github.com/cruiseresearchgroup/DIEF\_BTS
Building Timeseries Dataset: Empowering Large-Scale Building Analytics
[ "Arian Prabowo", "Xiachong LIN", "Imran Razzak", "Hao Xue", "Emily W. Yap", "Matt Amos", "Flora D. Salim" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
GitHub: [ "https://github.com/cruiseresearchgroup/dief_bts" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=6UQPx8SMXy
@inproceedings{ stergiou2024lavib, title={{LAVIB}: A Large-scale Video Interpolation Benchmark}, author={Alexandros Stergiou}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=6UQPx8SMXy} }
This paper introduces a LArge-scale Video Interpolation Benchmark (LAVIB) for the low-level video task of Video Frame Interpolation (VFI). LAVIB comprises a large collection of high-resolution videos sourced from the web through an automated pipeline with minimal requirements for human verification. Metrics are computed for each video's motion magnitudes, luminance conditions, frame sharpness, and contrast. The collection of videos and the creation of quantitative challenges based on these metrics are under-explored by current low-level video task datasets. In total, LAVIB includes 283K clips from 17K ultra-HD videos, covering 77.6 hours. Benchmark train, val, and test sets maintain similar video metric distributions. Further splits are also created for out-of-distribution (OOD) challenges, with train and test splits including videos of dissimilar attributes.
LAVIB: A Large-scale Video Interpolation Benchmark
[ "Alexandros Stergiou" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2406.09754
GitHub: [ "https://github.com/alexandrosstergiou/lavib" ]
paper_page: https://huggingface.co/papers/2406.09754
n_linked_authors: 1
upvotes: 0
num_comments: 0
n_authors: 1
Models: []
Datasets: [ "astergiou/LAVIB" ]
Spaces: []
old_Models: []
old_Datasets: [ "astergiou/LAVIB" ]
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=66XJOENOrL
@inproceedings{ ma2024srfund, title={{SRFUND}: A Multi-Granularity Hierarchical Structure Reconstruction Benchmark in Form Understanding}, author={Jiefeng Ma and Yan Wang and Chenyu Liu and Jun Du and Yu Hu and Zhang Zhenrong and Pengfei Hu and Qing Wang and Jianshu Zhang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=66XJOENOrL} }
Accurately identifying and organizing textual content is crucial for the automation of document processing in the field of form understanding. Existing datasets, such as FUNSD and XFUND, support entity classification and relationship prediction tasks but are typically limited to local and entity-level annotations. This limitation overlooks the hierarchically structured representation of documents, constraining comprehensive understanding of complex forms. To address this issue, we present the SRFUND, a hierarchically structured multi-task form understanding benchmark. SRFUND provides refined annotations on top of the original FUNSD and XFUND datasets, encompassing five tasks: (1) word to text-line merging, (2) text-line to entity merging, (3) entity category classification, (4) item table localization, and (5) entity-based full-document hierarchical structure recovery. We meticulously supplemented the original dataset with missing annotations at various levels of granularity and added detailed annotations for multi-item table regions within the forms. Additionally, we introduce global hierarchical structure dependencies for entity relation prediction tasks, surpassing traditional local key-value associations. The SRFUND dataset includes eight languages including English, Chinese, Japanese, German, French, Spanish, Italian, and Portuguese, making it a powerful tool for cross-lingual form understanding. Extensive experimental results demonstrate that the SRFUND dataset presents new challenges and significant opportunities in handling diverse layouts and global hierarchical structures of forms, thus providing deep insights into the field of form understanding. The original dataset and implementations of baseline methods are available at https://sprateam-ustc.github.io/SRFUND.
SRFUND: A Multi-Granularity Hierarchical Structure Reconstruction Benchmark in Form Understanding
[ "Jiefeng Ma", "Yan Wang", "Chenyu Liu", "Jun Du", "Yu Hu", "Zhang Zhenrong", "Pengfei Hu", "Qing Wang", "Jianshu Zhang" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2406.08757
GitHub: [ "" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=66PcEzkf95
@inproceedings{ longpre2024consent, title={Consent in Crisis: The Rapid Decline of the {AI} Data Commons}, author={Shayne Longpre and Robert Mahari and Ariel N. Lee and Campbell S. Lund and Hamidah Oderinwale and William Brannon and Nayan Saxena and Naana Obeng-Marnu and Tobin South and Cole J Hunter and Kevin Klyman and Christopher Klamm and Hailey Schoelkopf and Nikhil Singh and Manuel Cherep and Ahmad Mustafa Anis and An Dinh and Caroline Shamiso Chitongo and Da Yin and Damien Sileo and Deividas Mataciunas and Diganta Misra and Emad A. Alghamdi and Enrico Shippole and Jianguo Zhang and Joanna Materzynska and Kun Qian and Kushagra Tiwary and Lester James Validad Miranda and Manan Dey and Minnie Liang and Mohammed Hamdy and Niklas Muennighoff and Seonghyeon Ye and Seungone Kim and Shrestha Mohanty and Vipul Gupta and Vivek Sharma and Vu Minh Chien and Xuhui Zhou and Yizhi LI and Caiming Xiong and Luis Villa and Stella Biderman and Hanlin Li and Daphne Ippolito and Sara Hooker and Jad Kabbara and Alex Pentland}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=66PcEzkf95} }
General-purpose artificial intelligence (AI) systems are built on massive swathes of public web data, assembled into corpora such as C4, RefinedWeb, and Dolma. To our knowledge, we conduct the first, large-scale, longitudinal audit of the consent protocols for the web domains underlying AI training corpora. Our audit of 14,000 web domains provides an expansive view of crawlable web data and how codified data use preferences are changing over time. We observe a proliferation of AI-specific clauses to limit use, acute differences in restrictions on AI developers, as well as general inconsistencies between websites' expressed intentions in their Terms of Service and their robots.txt. We diagnose these as symptoms of ineffective web protocols, not designed to cope with the widespread re-purposing of the internet for AI. Our longitudinal analyses show that in a single year (2023-2024) there has been a rapid crescendo of data restrictions from web sources, rendering ~5\%+ of all tokens in C4, or 28%+ of the most actively maintained, critical sources in C4, fully restricted from use. For Terms of Service crawling restrictions, a full 45% of C4 is now restricted. If respected or enforced, these restrictions are rapidly biasing the diversity, freshness, and scaling laws for general-purpose AI systems. We hope to illustrate the emerging crises in data consent, for both developers and creators. The foreclosure of much of the open web will impact not only commercial AI, but also non-commercial AI and academic research.
Consent in Crisis: The Rapid Decline of the AI Data Commons
[ "Shayne Longpre", "Robert Mahari", "Ariel N. Lee", "Campbell S. Lund", "Hamidah Oderinwale", "William Brannon", "Nayan Saxena", "Naana Obeng-Marnu", "Tobin South", "Cole J Hunter", "Kevin Klyman", "Christopher Klamm", "Hailey Schoelkopf", "Nikhil Singh", "Manuel Cherep", "Ahmad Mustafa Anis", "An Dinh", "Caroline Shamiso Chitongo", "Da Yin", "Damien Sileo", "Deividas Mataciunas", "Diganta Misra", "Emad A. Alghamdi", "Enrico Shippole", "Jianguo Zhang", "Joanna Materzynska", "Kun Qian", "Kushagra Tiwary", "Lester James Validad Miranda", "Manan Dey", "Minnie Liang", "Mohammed Hamdy", "Niklas Muennighoff", "Seonghyeon Ye", "Seungone Kim", "Shrestha Mohanty", "Vipul Gupta", "Vivek Sharma", "Vu Minh Chien", "Xuhui Zhou", "Yizhi LI", "Caiming Xiong", "Luis Villa", "Stella Biderman", "Hanlin Li", "Daphne Ippolito", "Sara Hooker", "Jad Kabbara", "Alex Pentland" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2407.14933
GitHub: [ "" ]
paper_page: https://huggingface.co/papers/2407.14933
n_linked_authors: 12
upvotes: 12
num_comments: 2
n_authors: 49
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=64sZtFSOh6
@inproceedings{ haresh2024clevrskills, title={ClevrSkills: Compositional Language And Visual Reasoning in Robotics}, author={Sanjay Haresh and Daniel Dijkman and Apratim Bhattacharyya and Roland Memisevic}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=64sZtFSOh6} }
Robotics tasks are highly compositional by nature. For example, to perform a high-level task like cleaning the table a robot must employ low-level capabilities of moving the effectors to the objects on the table, pick them up and then move them off the table one-by-one, while re-evaluating the consequently dynamic scenario in the process. Given that large vision language models (VLMs) have shown progress on many tasks that require high level, human-like reasoning, we ask the question: if the models are taught the requisite low-level capabilities, can they compose them in novel ways to achieve interesting high-level tasks like cleaning the table without having to be explicitly taught so? To this end, we present ClevrSkills - a benchmark suite for compositional reasoning in robotics. ClevrSkills is an environment suite developed on top of the ManiSkill2 simulator and an accompanying dataset. The dataset contains trajectories generated on a range of robotics tasks with language and visual annotations as well as multi-modal prompts as task specification. The suite includes a curriculum of tasks with three levels of compositional understanding, starting with simple tasks requiring basic motor skills. We benchmark multiple different VLM baselines on ClevrSkills and show that even after being pre-trained on large numbers of tasks, these models fail on compositional reasoning in robotics tasks.
ClevrSkills: Compositional Language And Visual Reasoning in Robotics
[ "Sanjay Haresh", "Daniel Dijkman", "Apratim Bhattacharyya", "Roland Memisevic" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2411.09052
GitHub: [ "" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=5t7DtLwTVC
@inproceedings{ hou2024wikicontradict, title={WikiContradict: A Benchmark for Evaluating {LLM}s on Real-World Knowledge Conflicts from Wikipedia}, author={Yufang Hou and Alessandra Pascale and Javier Carnerero-Cano and Tigran T. Tchrakian and Radu Marinescu and Elizabeth M. Daly and Inkit Padhi and Prasanna Sattigeri}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=5t7DtLwTVC} }
Retrieval-augmented generation (RAG) has emerged as a promising solution to mitigate the limitations of large language models (LLMs), such as hallucinations and outdated information. However, it remains unclear how LLMs handle knowledge conflicts arising from different augmented retrieved passages, especially when these passages originate from the same source and have equal trustworthiness. In this work, we conduct a comprehensive evaluation of LLM-generated answers to questions that have varying answers based on contradictory passages from Wikipedia, a dataset widely regarded as a high-quality pre-training resource for most LLMs. Specifically, we introduce WikiContradict, a benchmark consisting of 253 high-quality, human-annotated instances designed to assess the performance of LLMs in providing a complete perspective on conflicts from the retrieved documents, rather than choosing one answer over another, when augmented with retrieved passages containing real-world knowledge conflicts. We benchmark a diverse range of both closed and open-source LLMs under different QA scenarios, including RAG with a single passage, and RAG with 2 contradictory passages. Through rigorous human evaluations on a subset of WikiContradict instances involving 5 LLMs and over 3,500 judgements, we shed light on the behaviour and limitations of these models. For instance, when provided with two passages containing contradictory facts, all models struggle to generate answers that accurately reflect the conflicting nature of the context, especially for implicit conflicts requiring reasoning. Since human evaluation is costly, we also introduce an automated model that estimates LLM performance using a strong open-source language model, achieving an F-score of 0.8. Using this automated metric, we evaluate more than 1,500 answers from seven LLMs across all WikiContradict instances.
WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia
[ "Yufang Hou", "Alessandra Pascale", "Javier Carnerero-Cano", "Tigran T. Tchrakian", "Radu Marinescu", "Elizabeth M. Daly", "Inkit Padhi", "Prasanna Sattigeri" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
arxiv_id: 2406.13805
GitHub: [ "" ]
paper_page: https://huggingface.co/papers/2406.13805
n_linked_authors: 2
upvotes: 0
num_comments: 0
n_authors: 8
Models: []
Datasets: [ "ibm/Wikipedia_contradict_benchmark" ]
Spaces: []
old_Models: []
old_Datasets: [ "ibm/Wikipedia_contradict_benchmark" ]
old_Spaces: []
paper_page_exists_pre_conf: 1
bibtex_url: null
proceedings: https://openreview.net/forum?id=5c1hh8AeHv
@inproceedings{ zhang2024multitrust, title={MultiTrust: A Comprehensive Benchmark Towards Trustworthy Multimodal Large Language Models}, author={Yichi Zhang and Yao Huang and Yitong Sun and Chang Liu and Zhe Zhao and Zhengwei Fang and Yifan Wang and Huanran Chen and Xiao Yang and Xingxing Wei and Hang Su and Yinpeng Dong and Jun Zhu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=5c1hh8AeHv} }
Despite the superior capabilities of Multimodal Large Language Models (MLLMs) across diverse tasks, they still face significant trustworthiness challenges. Yet, current literature on the assessment of trustworthy MLLMs remains limited, lacking a holistic evaluation to offer thorough insights into future improvements. In this work, we establish **MultiTrust**, the first comprehensive and unified benchmark on the trustworthiness of MLLMs across five primary aspects: *truthfulness*, *safety*, *robustness*, *fairness*, and *privacy*. Our benchmark employs a rigorous evaluation strategy that addresses both multimodal risks and cross-modal impacts, encompassing 32 diverse tasks with self-curated datasets. Extensive experiments with 21 modern MLLMs reveal some previously unexplored trustworthiness issues and risks, highlighting the complexities introduced by the multimodality and underscoring the necessity for advanced methodologies to enhance their reliability. For instance, typical proprietary models still struggle with the perception of visually confusing images and are vulnerable to multimodal jailbreaking and adversarial attacks; MLLMs are more inclined to disclose privacy in text and reveal ideological and cultural biases even when paired with irrelevant images in inference, indicating that the multimodality amplifies the internal risks from base LLMs. Additionally, we release a scalable toolbox for standardized trustworthiness research, aiming to facilitate future advancements in this important field. Code and resources are publicly available at: [https://multi-trust.github.io/](https://multi-trust.github.io/).
MultiTrust: A Comprehensive Benchmark Towards Trustworthy Multimodal Large Language Models
[ "Yichi Zhang", "Yao Huang", "Yitong Sun", "Chang Liu", "Zhe Zhao", "Zhengwei Fang", "Yifan Wang", "Huanran Chen", "Xiao Yang", "Xingxing Wei", "Hang Su", "Yinpeng Dong", "Jun Zhu" ]
id: NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
type: poster
GitHub: [ "" ]
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
old_Models: []
old_Datasets: []
old_Spaces: []
paper_page_exists_pre_conf: 0
bibtex_url: null
proceedings: https://openreview.net/forum?id=5VtI484yVy
@inproceedings{ bortolotti2024a, title={A Neuro-Symbolic Benchmark Suite for Concept Quality and Reasoning Shortcuts}, author={Samuele Bortolotti and Emanuele Marconato and Tommaso Carraro and Paolo Morettin and Emile van Krieken and Antonio Vergari and Stefano Teso and Andrea Passerini}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=5VtI484yVy} }
The advent of powerful neural classifiers has increased interest in problems that require both learning and reasoning. These problems are critical for understanding important properties of models, such as trustworthiness, generalization, interpretability, and compliance to safety and structural constraints. However, recent research observed that tasks requiring both learning and reasoning on background knowledge often suffer from reasoning shortcuts (RSs): predictors can solve the downstream reasoning task without associating the correct concepts to the high-dimensional data. To address this issue, we introduce rsbench, a comprehensive benchmark suite designed to systematically evaluate the impact of RSs on models by providing easy access to highly customizable tasks affected by RSs. Furthermore, rsbench implements common metrics for evaluating concept quality and introduces novel formal verification procedures for assessing the presence of RSs in learning tasks. Using rsbench, we highlight that obtaining high quality concepts in both purely neural and neuro-symbolic models is a far-from-solved problem. rsbench is available at: https://unitn-sml.github.io/rsbench.
A Neuro-Symbolic Benchmark Suite for Concept Quality and Reasoning Shortcuts
[ "Samuele Bortolotti", "Emanuele Marconato", "Tommaso Carraro", "Paolo Morettin", "Emile van Krieken", "Antonio Vergari", "Stefano Teso", "Andrea Passerini" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.10368
[ "https://github.com/unitn-sml/rsbench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5S0y3OhfRs
@inproceedings{ liang2024ovtb, title={{OVT}-B: A New Large-Scale Benchmark for Open-Vocabulary Multi-Object Tracking}, author={Haiji Liang and Ruize Han}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=5S0y3OhfRs} }
Open-vocabulary object perception, which aims to identify objects from novel classes not seen during training, has become an important topic in artificial intelligence. Under this setting, open-vocabulary object detection (OVD) in a single image has been widely studied in the literature. However, open-vocabulary object tracking (OVT) from a video has been studied less, one reason being the shortage of benchmarks. In this work, we have built a new large-scale benchmark for open-vocabulary multi-object tracking, namely OVT-B. OVT-B contains 1,048 object categories and 1,973 videos with 637,608 bounding box annotations, which is much larger than the sole existing open-vocabulary tracking dataset, i.e., the OVTAO-val dataset (200+ categories, 900+ videos). The proposed OVT-B can serve as a new benchmark to pave the way for OVT research. We also develop a simple yet effective baseline method for OVT. It integrates motion features for object tracking, an important cue for MOT that is ignored in previous OVT methods. Experimental results verify the usefulness of the proposed benchmark and the effectiveness of our method. We have released the benchmark to the public at https://github.com/Coo1Sea/OVT-B-Dataset.
OVT-B: A New Large-Scale Benchmark for Open-Vocabulary Multi-Object Tracking
[ "Haiji Liang", "Ruize Han" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.17534
[ "https://github.com/coo1sea/ovt-b-dataset" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=5OZTcbgCyH
@inproceedings{ kwon2024ehrcon, title={{EHRC}on: Dataset for Checking Consistency between Unstructured Notes and Structured Tables in Electronic Health Records}, author={Yeonsu Kwon and Jiho Kim and Gyubok Lee and Seongsu Bae and Daeun Kyung and Wonchul Cha and Tom Pollard and ALISTAIR JOHNSON and Edward Choi}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=5OZTcbgCyH} }
Electronic Health Records (EHRs) are integral for storing comprehensive patient medical records, combining structured data (e.g., medications) with detailed clinical notes (e.g., physician notes). These elements are essential for straightforward data retrieval and provide deep, contextual insights into patient care. However, they often suffer from discrepancies due to unintuitive EHR system designs and human errors, posing serious risks to patient safety. To address this, we developed EHRCon, a new dataset and task specifically designed to ensure data consistency between structured tables and unstructured notes in EHRs. EHRCon was crafted in collaboration with healthcare professionals using the MIMIC-III EHR dataset, and includes manual annotations of 3,943 entities across 105 clinical notes checked against database entries for consistency. EHRCon has two versions, one using the original MIMIC-III schema, and another using the OMOP CDM schema, in order to increase its applicability and generalizability. Furthermore, leveraging the capabilities of large language models, we introduce CheckEHR, a novel framework for verifying the consistency between clinical notes and database tables. CheckEHR utilizes an eight-stage process and shows promising results in both few-shot and zero-shot settings. The code is available at \url{https://github.com/dustn1259/EHRCon}.
EHRCon: Dataset for Checking Consistency between Unstructured Notes and Structured Tables in Electronic Health Records
[ "Yeonsu Kwon", "Jiho Kim", "Gyubok Lee", "Seongsu Bae", "Daeun Kyung", "Wonchul Cha", "Tom Pollard", "ALISTAIR JOHNSON", "Edward Choi" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.16341
[ "https://github.com/dustn1259/ehrcon" ]
https://huggingface.co/papers/2406.16341
7
11
2
9
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=5L05sLRIlQ
@inproceedings{ peng2024vasttrack, title={VastTrack: Vast Category Visual Object Tracking}, author={Liang Peng and Junyuan Gao and Xinran Liu and Weihong Li and Shaohua Dong and Zhipeng Zhang and Heng Fan and Libo Zhang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=5L05sLRIlQ} }
In this paper, we propose a novel benchmark, named VastTrack, aiming to facilitate the development of general visual tracking by encompassing abundant classes and videos. VastTrack has several attractive properties: (1) Vast Object Category. In particular, it covers targets from 2,115 categories, significantly surpassing the object classes of existing popular benchmarks (e.g., GOT-10k with 563 classes and LaSOT with 70 categories). By providing such a vast set of object classes, we expect to enable more general object tracking. (2) Larger scale. Compared with current benchmarks, VastTrack provides 50,610 videos with 4.2 million frames, which makes it the largest dataset to date in terms of the number of videos, and hence it could benefit the training of even more powerful visual trackers in the deep learning era. (3) Rich Annotation. Besides conventional bounding box annotations, VastTrack also provides linguistic descriptions with more than 50K sentences for the videos. Such rich annotations enable the development of both vision-only and vision-language tracking. To ensure precise annotation, each frame in the videos is manually labeled with multiple stages of careful inspection and refinement. To understand the performance of existing trackers and to provide baselines for future comparison, we extensively evaluate 25 representative trackers. The results, not surprisingly, show significant drops compared to those on current datasets, due to the lack of abundant categories and videos from diverse scenarios for training, and more effort is urgently required to improve general visual tracking. Our VastTrack, the toolkit, and evaluation results are publicly available at https://github.com/HengLan/VastTrack.
VastTrack: Vast Category Visual Object Tracking
[ "Liang Peng", "Junyuan Gao", "Xinran Liu", "Weihong Li", "Shaohua Dong", "Zhipeng Zhang", "Heng Fan", "Libo Zhang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2403.03493
[ "https://github.com/henglan/vasttrack" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=59E19c6yrN
@inproceedings{ abdelnabi2024cooperation, title={Cooperation, Competition, and Maliciousness: {LLM}-Stakeholders Interactive Negotiation}, author={Sahar Abdelnabi and Amr Gomaa and Sarath Sivaprasad and Lea Sch{\"o}nherr and Mario Fritz}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=59E19c6yrN} }
There is a growing interest in using Large Language Models (LLMs) in multi-agent systems to tackle interactive real-world tasks that require effective collaboration and assessing complex situations. Yet, we have a limited understanding of LLMs' communication and decision-making abilities in multi-agent setups. The fundamental task of negotiation spans many key features of communication, such as cooperation, competition, and manipulation potentials. Thus, we propose using scorable negotiation to evaluate LLMs. We create a testbed of complex multi-agent, multi-issue, and semantically rich negotiation games. To reach an agreement, agents must have strong arithmetic, inference, exploration, and planning capabilities while integrating them in a dynamic and multi-turn setup. We propose metrics to rigorously quantify agents' performance and alignment with the assigned role. We provide procedures to create new games and increase games' difficulty to have an evolving benchmark. Importantly, we evaluate critical safety aspects such as the interaction dynamics between agents influenced by greedy and adversarial players. Our benchmark is highly challenging; GPT-3.5 and small models mostly fail, and GPT-4 and SoTA large models (e.g., Llama-3 70b) still underperform in reaching agreement in non-cooperative and more difficult games.
Cooperation, Competition, and Maliciousness: LLM-Stakeholders Interactive Negotiation
[ "Sahar Abdelnabi", "Amr Gomaa", "Sarath Sivaprasad", "Lea Schönherr", "Mario Fritz" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2309.17234
[ "https://github.com/s-abdelnabi/llm-deliberation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4diKTLmg2y
@inproceedings{ monteiro2024repliqa, title={RepLi{QA}: A Question-Answering Dataset for Benchmarking {LLM}s on Unseen Reference Content}, author={Joao Monteiro and Pierre-Andre Noel and {\'E}tienne Marcotte and Sai Rajeswar and Valentina Zantedeschi and David Vazquez and Nicolas Chapados and Christopher Pal and Perouz Taslakian}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=4diKTLmg2y} }
Large Language Models (LLMs) are trained on vast amounts of data, most of which is automatically scraped from the internet. This data includes encyclopedic documents that harbor a vast amount of general knowledge (*e.g.*, Wikipedia) but also potentially overlap with benchmark datasets used for evaluating LLMs. Consequently, evaluating models on test splits that might have leaked into the training set is prone to misleading conclusions. To foster sound evaluation of language models, we introduce a new test dataset named RepLiQA, suited for question-answering and topic retrieval tasks. RepLiQA is a collection of five splits of test sets, four of which have not been released to the internet or exposed to LLM APIs prior to this publication. Each sample in RepLiQA comprises (1) a reference document crafted by a human annotator and depicting an imaginary scenario (*e.g.*, a news article) absent from the internet; (2) a question about the document’s topic; (3) a ground-truth answer derived directly from the information in the document; and (4) the paragraph extracted from the reference document containing the answer. As such, accurate answers can only be generated if a model can find relevant content within the provided document. We run a large-scale benchmark comprising several state-of-the-art LLMs to uncover differences in performance across models of various types and sizes in a context-conditional language modeling setting. Released splits of RepLiQA can be found here: https://huggingface.co/datasets/ServiceNow/repliqa.
RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content
[ "Joao Monteiro", "Pierre-Andre Noel", "Étienne Marcotte", "Sai Rajeswar", "Valentina Zantedeschi", "David Vazquez", "Nicolas Chapados", "Christopher Pal", "Perouz Taslakian" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.11811
[ "https://github.com/ServiceNow/repliqa" ]
https://huggingface.co/papers/2406.11811
5
16
1
9
[]
[ "ServiceNow/repliqa" ]
[]
[]
[ "ServiceNow/repliqa" ]
[]
1
null
https://openreview.net/forum?id=4Vhc7uPHjn
@inproceedings{ chen2024rextime, title={Re{XT}ime: A Benchmark Suite for Reasoning-Across-Time in Videos}, author={Jr-Jen Chen and Yu-Chien Liao and Hsi-Che Lin and Yu-Chu Yu and Yen-Chun Chen and Yu-Chiang Frank Wang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=4Vhc7uPHjn} }
We introduce ReXTime, a benchmark designed to rigorously test AI models' ability to perform temporal reasoning within video events. Specifically, ReXTime focuses on reasoning across time, i.e. human-like understanding when the question and its corresponding answer occur in different video segments. This form of reasoning, requiring advanced understanding of cause-and-effect relationships across video segments, poses significant challenges to even the frontier multimodal large language models. To facilitate this evaluation, we develop an automated pipeline for generating temporal reasoning question-answer pairs, significantly reducing the need for labor-intensive manual annotations. Our benchmark includes 921 carefully vetted validation samples and 2,143 test samples, each manually curated for accuracy and relevance. Evaluation results show that while frontier large language models outperform academic models, they still lag behind human performance by a significant 14.3\% accuracy gap. Additionally, our pipeline creates a training dataset of 9,695 machine generated samples without manual effort, which empirical studies suggest can enhance the across-time reasoning via fine-tuning.
ReXTime: A Benchmark Suite for Reasoning-Across-Time in Videos
[ "Jr-Jen Chen", "Yu-Chien Liao", "Hsi-Che Lin", "Yu-Chu Yu", "Yen-Chun Chen", "Yu-Chiang Frank Wang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.19392
[ "https://github.com/rextime/rextime" ]
https://huggingface.co/papers/2406.19392
0
0
0
6
[]
[ "ReXTime/ReXTime" ]
[]
[]
[ "ReXTime/ReXTime" ]
[]
1
null
https://openreview.net/forum?id=4S8agvKjle
@inproceedings{ ma2024agentboard, title={AgentBoard: An Analytical Evaluation Board of Multi-turn {LLM} Agents}, author={Chang Ma and Junlei Zhang and Zhihao Zhu and Cheng Yang and Yujiu Yang and Yaohui Jin and Zhenzhong Lan and Lingpeng Kong and Junxian He}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=4S8agvKjle} }
Evaluating large language models (LLMs) as general-purpose agents is essential for understanding their capabilities and facilitating their integration into practical applications. However, the evaluation process presents substantial challenges. A primary obstacle is the benchmarking of agent performance across diverse scenarios within a unified framework, especially in maintaining partially-observable environments and ensuring multi-round interactions. Moreover, current evaluation frameworks mostly focus on the final success rate, revealing few insights during the process and failing to provide a deep understanding of model abilities. To address these challenges, we introduce AgentBoard, a pioneering comprehensive benchmark and accompanying open-source evaluation framework tailored to the analytical evaluation of LLM agents. AgentBoard offers a fine-grained progress rate metric that captures incremental advancements, as well as a comprehensive evaluation toolkit that enables easy, multi-faceted assessment of agents through interactive visualization. This not only sheds light on the capabilities and limitations of LLM agents but also propels the interpretability of their performance to the forefront. Ultimately, AgentBoard serves as a significant step towards demystifying agent behaviors and accelerating the development of stronger LLM agents.
AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents
[ "Chang Ma", "Junlei Zhang", "Zhihao Zhu", "Cheng Yang", "Yujiu Yang", "Yaohui Jin", "Zhenzhong Lan", "Lingpeng Kong", "Junxian He" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2401.13178
[ "https://github.com/hkust-nlp/agentboard" ]
https://huggingface.co/papers/2401.13178
1
0
2
9
[]
[ "hkust-nlp/agentboard" ]
[]
[]
[ "hkust-nlp/agentboard" ]
[]
1
null
https://openreview.net/forum?id=43s8hgGTOX
@inproceedings{ roush2024opendebateevidence, title={OpenDebateEvidence: A Massive-Scale Argument Mining and Summarization Dataset}, author={Allen G Roush and Yusuf Shabazz and Arvind Balaji and Peter Zhang and Stefano Mezza and Markus Zhang and Sanjay Basu and Sriram Vishwanath and Ravid Shwartz-Ziv}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=43s8hgGTOX} }
We introduce OpenDebateEvidence, a comprehensive dataset for argument mining and summarization sourced from the American Competitive Debate community. This dataset includes over 3.5 million documents with rich metadata, making it one of the most extensive collections of debate evidence. OpenDebateEvidence captures the complexity of arguments in high school and college debates, providing valuable resources for training and evaluation. Our extensive experiments demonstrate the efficacy of fine-tuning state-of-the-art large language models for argumentative abstractive summarization across various methods, models, and datasets. By providing this comprehensive resource, we aim to advance computational argumentation and support practical applications for debaters, educators, and researchers. OpenDebateEvidence is publicly available to support further research and innovation in computational argumentation. Access it here: https://huggingface.co/datasets/Yusuf5/OpenCaselist.
OpenDebateEvidence: A Massive-Scale Argument Mining and Summarization Dataset
[ "Allen G Roush", "Yusuf Shabazz", "Arvind Balaji", "Peter Zhang", "Stefano Mezza", "Markus Zhang", "Sanjay Basu", "Sriram Vishwanath", "Ravid Shwartz-Ziv" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.14657
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=4351SumKS9
@inproceedings{ kannen2024beyond, title={Beyond Aesthetics: Cultural Competence in Text-to-Image Models}, author={Nithish Kannen and Arif Ahmad and marco Andreetto and Vinodkumar Prabhakaran and Utsav Prabhu and Adji Bousso Dieng and Pushpak Bhattacharyya and Shachi Dave}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=4351SumKS9} }
Text-to-Image (T2I) models are being increasingly adopted in diverse global communities where they create visual representations of their unique cultures. Current T2I benchmarks primarily focus on faithfulness, aesthetics, and realism of generated images, overlooking the critical dimension of *cultural competence*. In this work, we introduce a framework to evaluate cultural competence of T2I models along two crucial dimensions: cultural awareness and cultural diversity, and present a scalable approach using a combination of structured knowledge bases and large language models to build a large dataset of cultural artifacts to enable this evaluation. In particular, we apply this approach to build CUBE (CUltural BEnchmark for Text-to-Image models), a first-of-its-kind benchmark to evaluate cultural competence of T2I models. CUBE covers cultural artifacts associated with 8 countries across different geo-cultural regions and along 3 concepts: cuisine, landmarks, and art. CUBE consists of 1) CUBE-1K, a set of high-quality prompts that enable the evaluation of cultural awareness, and 2) CUBE-CSpace, a larger dataset of cultural artifacts that serves as grounding to evaluate cultural diversity. We also introduce cultural diversity as a novel T2I evaluation component, leveraging quality-weighted Vendi score. Our evaluations reveal significant gaps in the cultural awareness of existing models across countries and provide valuable insights into the cultural diversity of T2I outputs for underspecified prompts. Our methodology is extendable to other cultural regions and concepts and can facilitate the development of T2I models that better cater to the global population.
Beyond Aesthetics: Cultural Competence in Text-to-Image Models
[ "Nithish Kannen", "Arif Ahmad", "marco Andreetto", "Vinodkumar Prabhakaran", "Utsav Prabhu", "Adji Bousso Dieng", "Pushpak Bhattacharyya", "Shachi Dave" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.06863
[ "https://github.com/google-research-datasets/cube" ]
https://huggingface.co/papers/2407.06863
1
1
0
8
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=42mqpIrA39
@inproceedings{ shah2024stackeval, title={StackEval: Benchmarking {LLM}s in Coding Assistance}, author={Nidhish Shah and Zulkuf Genc and Dogu Araci}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=42mqpIrA39} }
We present two comprehensive benchmarks to evaluate the performance of language models in coding assistance tasks, covering code writing, debugging, code review, and conceptual understanding. Our main contribution includes two curated datasets: StackEval, a large-scale benchmark derived from Stack Overflow questions, and StackUnseen, a dynamic benchmark featuring the most recent Stack Overflow content. These benchmarks offer novel insights into the capabilities and limitations of LLMs, particularly in handling new and emerging content. Additionally, we assess LLMs' proficiency as judges for coding tasks using a curated, human-annotated dataset, exploring their evaluation capabilities and potential biases, including whether they favor their own generated solutions. Our findings underscore the potential of these benchmarks to advance LLM development and application in coding assistance. To ensure reproducibility, we publicly share our datasets and evaluation code at https://github.com/ProsusAI/stack-eval.
StackEval: Benchmarking LLMs in Coding Assistance
[ "Nidhish Shah", "Zulkuf Genc", "Dogu Araci" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3qoQ6AolAz
@inproceedings{ tang2024mars, title={Mars: Situated Inductive Reasoning in an Open-World Environment}, author={Xiaojuan Tang and Jiaqi Li and Yitao Liang and Song-Chun Zhu and Muhan Zhang and Zilong Zheng}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=3qoQ6AolAz} }
Large Language Models (LLMs) trained on massive corpora have shown remarkable success in knowledge-intensive tasks. Yet, most of them rely on pre-stored knowledge. Inducing new general knowledge from a specific environment and performing reasoning with the acquired knowledge, i.e., situated inductive reasoning, is crucial and challenging for machine intelligence. In this paper, we design Mars, an interactive environment devised for situated inductive reasoning. It introduces counter-commonsense game mechanisms by modifying terrain, survival setting and task dependency while adhering to certain principles. In Mars, agents need to actively interact with their surroundings, derive useful rules and perform decision-making tasks in specific contexts. We conduct experiments on various RL-based and LLM-based methods, finding that they all struggle on this challenging situated inductive reasoning benchmark. Furthermore, we explore Induction from Reflection, where we instruct agents to perform inductive reasoning from the history trajectory. The superior performance underscores the importance of inductive reasoning in Mars. Through Mars, we aim to galvanize advancements in situated inductive reasoning and set the stage for developing the next generation of AI systems that can reason in an adaptive and context-sensitive way.
Mars: Situated Inductive Reasoning in an Open-World Environment
[ "Xiaojuan Tang", "Jiaqi Li", "Yitao Liang", "Song-Chun Zhu", "Muhan Zhang", "Zilong Zheng" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.08126
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3qH8q02x0n
@inproceedings{ sankar2024indicvoicesr, title={IndicVoices-R: Unlocking a Massive Multilingual Multi-speaker Speech Corpus for Scaling Indian {TTS}}, author={Ashwin Sankar and Srija Anand and Praveen Srinivasa Varadhan and Sherry Thomas and Mehak Singal and Shridhar Kumar and Deovrat Mehendale and Aditi Krishana and Giri Raju and Mitesh M Khapra}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=3qH8q02x0n} }
Recent advancements in text-to-speech (TTS) synthesis show that large-scale models trained with extensive web data produce highly natural-sounding output. However, such data is scarce for Indian languages due to the lack of high-quality, manually subtitled data on platforms like LibriVox or YouTube. To address this gap, we enhance existing large-scale ASR datasets containing natural conversations collected in low-quality environments to generate high-quality TTS training data. Our pipeline leverages the cross-lingual generalization of denoising and speech enhancement models trained on English and applied to Indian languages. This results in IndicVoices-R (IV-R), the largest multilingual Indian TTS dataset derived from an ASR dataset, with 1,704 hours of high-quality speech from 10,496 speakers across 22 Indian languages. IV-R matches the quality of gold-standard TTS datasets like LJSpeech, LibriTTS, and IndicTTS. We also introduce the IV-R Benchmark, the first to assess zero-shot, few-shot, and many-shot speaker generalization capabilities of TTS models on Indian voices, ensuring diversity in age, gender, and style. We demonstrate that fine-tuning an English pre-trained model on a combined dataset of high-quality IndicTTS and our IV-R dataset results in better zero-shot speaker generalization compared to fine-tuning on the IndicTTS dataset alone. Further, our evaluation reveals limited zero-shot generalization for Indian voices in TTS models trained on prior datasets, which we improve by fine-tuning the model on our data containing a diverse set of speakers across language families. We open-source code and data for all 22 official Indian languages.
IndicVoices-R: Unlocking a Massive Multilingual Multi-speaker Speech Corpus for Scaling Indian TTS
[ "Ashwin Sankar", "Srija Anand", "Praveen Srinivasa Varadhan", "Sherry Thomas", "Mehak Singal", "Shridhar Kumar", "Deovrat Mehendale", "Aditi Krishana", "Giri Raju", "Mitesh M Khapra" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2409.05356
[ "https://github.com/ai4bharat/indicvoices-r" ]
https://huggingface.co/papers/2409.05356
2
0
0
10
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=3ZjaXTPWiE
@inproceedings{ cheng2024nanobaselib, title={NanoBaseLib: A Multi-Task Benchmark Dataset for Nanopore Sequencing}, author={Guangzhao Cheng and Chengbo Fu and Lu Cheng}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=3ZjaXTPWiE} }
Nanopore sequencing is the third-generation sequencing technology with capabilities of generating long-read sequences and directly measuring modifications on DNA/RNA molecules, which makes it ideal for biological applications such as human Telomere-to-Telomere (T2T) genome assembly, Ebola virus surveillance and COVID-19 mRNA vaccine development. However, accuracies of computational methods in various tasks of Nanopore sequencing data analysis are far from satisfactory. For instance, the base calling accuracy of Nanopore RNA sequencing is $\sim$90\%, while the aim is $\sim$99.9\%. This highlights an urgent need for contributions from the machine learning community. A bottleneck that prevents machine learning researchers from entering this field is the lack of a large integrated benchmark dataset. To this end, we present NanoBaseLib, a comprehensive multi-task benchmark dataset. It integrates 16 public datasets with over 30 million reads for four critical tasks in Nanopore data analysis. To facilitate method development, we have preprocessed all the raw data using a uniform workflow, stored all the intermediate results in uniform formats, analysed test datasets with various baseline methods for four benchmark tasks, and developed a software package to easily access these results. NanoBaseLib is available at https://nanobaselib.github.io.
NanoBaseLib: A Multi-Task Benchmark Dataset for Nanopore Sequencing
[ "Guangzhao Cheng", "Chengbo Fu", "Lu Cheng" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3ZLuZ2l0aR
@inproceedings{ madan2024revisiting, title={Revisiting Few-Shot Object Detection with Vision-Language Models}, author={Anish Madan and Neehar Peri and Shu Kong and Deva Ramanan}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=3ZLuZ2l0aR} }
The era of vision-language models (VLMs) trained on web-scale datasets challenges conventional formulations of “open-world" perception. In this work, we revisit the task of few-shot object detection (FSOD) in the context of recent foundational VLMs. First, we point out that zero-shot predictions from VLMs such as GroundingDINO significantly outperform state-of-the-art few-shot detectors (48 vs. 33 AP) on COCO. Despite their strong zero-shot performance, such foundation models may still be sub-optimal. For example, trucks on the web may be defined differently from trucks for a target application such as autonomous vehicle perception. We argue that the task of few-shot recognition can be reformulated as aligning foundation models to target concepts using a few examples. Interestingly, such examples can be multi-modal, using both text and visual cues, mimicking instructions that are often given to human annotators when defining a target concept of interest. Concretely, we propose Foundational FSOD, a new benchmark protocol that evaluates detectors pre-trained on any external data and fine-tuned on multi-modal (text and visual) K-shot examples per target class. We repurpose nuImages for Foundational FSOD, benchmark several popular open-source VLMs, and provide an empirical analysis of state-of-the-art methods. Lastly, we discuss our recent CVPR 2024 Foundational FSOD competition and share insights from the community. Notably, the winning team significantly outperforms our baseline by 23.3 mAP!
Revisiting Few-Shot Object Detection with Vision-Language Models
[ "Anish Madan", "Neehar Peri", "Shu Kong", "Deva Ramanan" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2312.14494
[ "https://github.com/anishmadan23/foundational_fsod" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3Yrfx7oYMF
@inproceedings{ li2024instruction, title={Instruction Embedding: Latent Representations of Instructions Towards Task Identification}, author={Yiwei Li and Jiayi Shi and Shaoxiong Feng and Peiwen Yuan and Xinglin Wang and Boyuan Pan and Heda Wang and Yao Hu and Kan Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=3Yrfx7oYMF} }
Instruction data is crucial for improving the capability of Large Language Models (LLMs) to align with human-level performance. Recent research such as LIMA demonstrates that alignment is essentially a process where the model adapts the interaction style or format of instructions to solve various tasks, leveraging pre-trained knowledge and skills. Therefore, for instruction data, the most important aspect is the task it represents, rather than the specific semantics and knowledge it contains. The latent representations of instructions play a role in instruction-related tasks such as data selection and demonstration retrieval. However, they are typically derived from text embeddings, which encompass overall semantic information that influences the representation of task categories. In this work, we introduce a new concept, instruction embedding, and construct the Instruction Embedding Benchmark (IEB) for its training and evaluation. We then propose a baseline Prompt-based Instruction Embedding (PIE) method that makes the representations focus more on tasks. The evaluation of PIE, alongside other embedding methods on IEB with two designed tasks, demonstrates its superior performance in accurately identifying task categories. Moreover, the application of instruction embeddings in four downstream tasks showcases their effectiveness and suitability for instruction-related tasks.
Instruction Embedding: Latent Representations of Instructions Towards Task Identification
[ "Yiwei Li", "Jiayi Shi", "Shaoxiong Feng", "Peiwen Yuan", "Xinglin Wang", "Boyuan Pan", "Heda Wang", "Yao Hu", "Kan Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2409.19680
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=3G1ZDXOI4f
@inproceedings{ wu2024longvideobench, title={LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding}, author={Haoning Wu and Dongxu Li and Bei Chen and Junnan Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=3G1ZDXOI4f} }
Large multimodal models (LMMs) are processing increasingly longer and richer inputs. Despite this progress, few public benchmarks are available to measure such development. To mitigate this gap, we introduce LongVideoBench, a question-answering benchmark that features video-language interleaved inputs up to an hour long. Our benchmark includes 3,763 varying-length web-collected videos with their subtitles across diverse themes, designed to comprehensively evaluate LMMs on long-term multimodal understanding. To achieve this, we interpret the primary challenge as accurately retrieving and reasoning over detailed multimodal information from long inputs. As such, we formulate a novel video question-answering task termed referring reasoning. Specifically, each question contains a referring query that references related video contexts, called the referred context. The model is then required to reason over relevant video details from the referred context. Following the paradigm of referring reasoning, we curate 6,678 human-annotated multiple-choice questions in 17 fine-grained categories, establishing one of the most comprehensive benchmarks for long-form video understanding. Evaluations suggest that LongVideoBench presents significant challenges even for the most advanced proprietary models (e.g., GPT-4o, Gemini-1.5-Pro), while their open-source counterparts show an even larger performance gap. In addition, our results indicate that model performance on the benchmark improves only when models are capable of processing more frames, positioning LongVideoBench as a valuable benchmark for evaluating future-generation long-context LMMs.
LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding
[ "Haoning Wu", "Dongxu Li", "Bei Chen", "Junnan Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.15754
[ "https://github.com/longvideobench/longvideobench" ]
https://huggingface.co/papers/2407.15754
2
19
3
4
[]
[ "longvideobench/LongVideoBench" ]
[ "longvideobench/LongVideoBench" ]
[]
[ "longvideobench/LongVideoBench" ]
[ "longvideobench/LongVideoBench" ]
1
null
https://openreview.net/forum?id=3814z76JNM
@inproceedings{ haider2024networkgym, title={NetworkGym: Reinforcement Learning Environments for Multi-Access Traffic Management in Network Simulation}, author={Momin Haider and Ming Yin and Menglei Zhang and Arpit Gupta and Jing Zhu and Yu-Xiang Wang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=3814z76JNM} }
Mobile devices such as smartphones, laptops, and tablets can often connect to multiple access networks (e.g., Wi-Fi, LTE, and 5G) simultaneously. Recent advancements facilitate seamless integration of these connections below the transport layer, enhancing the experience for apps that lack inherent multi-path support. This optimization hinges on dynamically determining the traffic distribution across networks for each device, a process referred to as multi-access traffic splitting. This paper introduces NetworkGym, a high-fidelity network environment simulator that facilitates generating multiple network traffic flows and multi-access traffic splitting. This simulator facilitates training and evaluating different RL-based solutions for the multi-access traffic splitting problem. Our initial explorations demonstrate that the majority of existing state-of-the-art offline RL algorithms (e.g. CQL) fail to outperform certain hand-crafted heuristic policies on average. This illustrates the urgent need to evaluate offline RL algorithms against a broader range of benchmarks, rather than relying solely on popular ones such as D4RL. We also propose an extension to the TD3+BC algorithm, named Pessimistic TD3 (PTD3), and demonstrate that it outperforms many state-of-the-art offline RL algorithms. PTD3's behavioral constraint mechanism, which relies on value-function pessimism, is theoretically motivated and relatively simple to implement. We open source our code and offline datasets at github.com/hmomin/networkgym.
NetworkGym: Reinforcement Learning Environments for Multi-Access Traffic Management in Network Simulation
[ "Momin Haider", "Ming Yin", "Menglei Zhang", "Arpit Gupta", "Jing Zhu", "Yu-Xiang Wang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2411.04138
[ "https://github.com/hmomin/networkgym" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=30XanJanJP
@inproceedings{ huang2024effibench, title={EffiBench: Benchmarking the Efficiency of Automatically Generated Code}, author={Dong HUANG and Yuhao QING and Weiyi Shang and Heming Cui and Jie Zhang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=30XanJanJP} }
Code generation models have increasingly become integral to aiding software development. Although current research has thoroughly examined the correctness of the code produced by code generation models, a vital aspect that plays a pivotal role in green computing and sustainability efforts — the efficiency of the generated code — has often been neglected. This paper presents EffiBench, a benchmark with 1,000 efficiency-critical coding problems to assess the efficiency of code generated by code generation models. EffiBench contains a diverse set of LeetCode coding problems. Each problem is paired with an executable human-written canonical solution, which obtains the SOTA efficiency on the LeetCode solution leaderboard. With EffiBench, we empirically examine the ability of 42 large language models (35 open-source and 7 closed-source) to generate efficient code. Our evaluation results demonstrate that the efficiency of the code generated by LLMs is generally worse than the efficiency of human-written canonical solutions. For example, code generated by GPT-4 has an average execution time \textbf{3.12} times that of the human-written canonical solutions. In the most extreme cases, the execution time and total memory usage of GPT-4 code are \textbf{13.89} and \textbf{43.92} times those of the canonical solutions. The source code of EffiBench is released at https://github.com/huangd1999/EffiBench. We also provide a leaderboard at https://huggingface.co/spaces/EffiBench/effibench-leaderboard.
EffiBench: Benchmarking the Efficiency of Automatically Generated Code
[ "Dong HUANG", "Yuhao QING", "Weiyi Shang", "Heming Cui", "Jie Zhang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2402.02037
[ "https://github.com/huangd1999/EffiBench" ]
https://huggingface.co/papers/2402.02037
0
0
0
4
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=2vs1fIAy3X
@inproceedings{ du2024constrained, title={Constrained Human-{AI} Cooperation: An Inclusive Embodied Social Intelligence Challenge}, author={Weihua Du and Qiushi Lyu and Jiaming Shan and Zhenting Qi and Hongxin Zhang and Sunli Chen and Andi Peng and Tianmin Shu and Kwonjoon Lee and Behzad Dariush and Chuang Gan}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=2vs1fIAy3X} }
We introduce Constrained Human-AI Cooperation (CHAIC), an inclusive embodied social intelligence challenge designed to test social perception and cooperation in embodied agents. In CHAIC, the goal is for an embodied agent equipped with egocentric observations to assist a human who may be operating under physical constraints—e.g., unable to reach high places or confined to a wheelchair—in performing common household or outdoor tasks as efficiently as possible. To achieve this, a successful helper must: (1) infer the human's intents and constraints by following the human and observing their behaviors (social perception), and (2) make a cooperative plan tailored to the human partner to solve the task as quickly as possible, working together as a team (cooperative planning). To benchmark this challenge, we create four new agents with real physical constraints and eight long-horizon tasks featuring both indoor and outdoor scenes with various constraints, emergency events, and potential risks. We benchmark planning- and learning-based baselines on the challenge and introduce a new method that leverages large language models and behavior modeling. Empirical evaluations demonstrate the effectiveness of our benchmark in enabling systematic assessment of key aspects of machine social intelligence. Our benchmark and code are publicly available at https://github.com/UMass-Foundation-Model/CHAIC.
Constrained Human-AI Cooperation: An Inclusive Embodied Social Intelligence Challenge
[ "Weihua Du", "Qiushi Lyu", "Jiaming Shan", "Zhenting Qi", "Hongxin Zhang", "Sunli Chen", "Andi Peng", "Tianmin Shu", "Kwonjoon Lee", "Behzad Dariush", "Chuang Gan" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2411.01796
[ "https://github.com/umass-foundation-model/chaic" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=2myGfVgfva
@inproceedings{ ju2024miradata, title={MiraData: A Large-Scale Video Dataset with Long Durations and Structured Captions}, author={Xuan Ju and Yiming Gao and Zhaoyang Zhang and Ziyang Yuan and Xintao Wang and Ailing Zeng and Yu Xiong and Qiang Xu and Ying Shan}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=2myGfVgfva} }
Sora's high-motion intensity and long consistent videos have significantly impacted the field of video generation, attracting unprecedented attention. However, existing publicly available datasets are inadequate for generating Sora-like videos, as they mainly contain short videos with low motion intensity and brief captions. To address these issues, we propose MiraData, a high-quality video dataset that surpasses previous ones in video duration, caption detail, motion strength, and visual quality. We curate MiraData from diverse, manually selected sources and meticulously process the data to obtain semantically consistent clips. GPT-4V is employed to annotate structured captions, providing detailed descriptions from four different perspectives along with a summarized dense caption. To better assess temporal consistency and motion intensity in video generation, we introduce MiraBench, which enhances existing benchmarks by adding 3D consistency and tracking-based motion strength metrics. MiraBench includes 150 evaluation prompts and 17 metrics covering temporal consistency, motion strength, 3D consistency, visual quality, text-video alignment, and distribution similarity. To demonstrate the utility and effectiveness of MiraData, we conduct experiments using our DiT-based video generation model, MiraDiT. The experimental results on MiraBench demonstrate the superiority of MiraData, especially in motion strength.
MiraData: A Large-Scale Video Dataset with Long Durations and Structured Captions
[ "Xuan Ju", "Yiming Gao", "Zhaoyang Zhang", "Ziyang Yuan", "Xintao Wang", "Ailing Zeng", "Yu Xiong", "Qiang Xu", "Ying Shan" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.06358
[ "" ]
https://huggingface.co/papers/2407.06358
3
18
1
9
[]
[ "TencentARC/MiraData" ]
[]
[]
[ "TencentARC/MiraData" ]
[]
1
null
https://openreview.net/forum?id=2kTX7K6osK
@inproceedings{ witter2024benchmarking, title={Benchmarking Estimators for Natural Experiments: A Novel Dataset and a Doubly Robust Algorithm}, author={R. Teal Witter and Christopher Musco}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=2kTX7K6osK} }
Estimating the effect of treatments from natural experiments, where treatments are pre-assigned, is an important and well-studied problem. We introduce a novel natural experiment dataset obtained from an early childhood literacy nonprofit. Surprisingly, applying over 20 established estimators to the dataset produces inconsistent results in evaluating the nonprofit's efficacy. To address this, we create a benchmark to evaluate estimator accuracy using synthetic outcomes, whose design was guided by domain experts. The benchmark extensively explores performance as real-world conditions like sample size, treatment correlation, and propensity score accuracy vary. Based on our benchmark, we observe that the class of doubly robust treatment effect estimators, which are based on simple and intuitive regression adjustment, generally outperforms other more complicated estimators by orders of magnitude. To better support our theoretical understanding of doubly robust estimators, we derive a closed-form expression for the variance of any such estimator that uses dataset splitting to obtain an unbiased estimate. This expression motivates the design of a new doubly robust estimator that uses a novel loss function when fitting functions for regression adjustment. We release the dataset and benchmark in a Python package; the package is built in a modular way to facilitate new datasets and estimators. https://github.com/rtealwitter/naturalexperiments
Benchmarking Estimators for Natural Experiments: A Novel Dataset and a Doubly Robust Algorithm
[ "R. Teal Witter", "Christopher Musco" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2409.04500
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=2dw3zQ3nk9
@inproceedings{ yang2024vript, title={Vript: A Video Is Worth Thousands of Words}, author={Dongjie Yang and Suyuan Huang and Chengqiang Lu and Xiaodong Han and Haoxin Zhang and Yan Gao and Yao Hu and hai zhao}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=2dw3zQ3nk9} }
Advancements in multimodal learning, particularly in video understanding and generation, require high-quality video-text datasets for improved model performance. Vript addresses this issue with a meticulously annotated corpus of 12K high-resolution videos, offering detailed, dense, and script-like captions for over 420K clips. Each clip has a caption of ~145 words, which is over 10x longer than most video-text datasets. Unlike captions only documenting static content in previous datasets, we enhance video captioning to video scripting by documenting not just the content, but also the camera operations, which include the shot types (medium shot, close-up, etc) and camera movements (panning, tilting, etc). By utilizing the Vript, we explore three training paradigms of aligning more text with the video modality rather than clip-caption pairs. This results in Vriptor, a top-performing video captioning model among open-source models, comparable to GPT-4V in performance. Vriptor is also a powerful model capable of end-to-end generation of dense and detailed captions for long videos. Moreover, we introduce Vript-Hard, a benchmark consisting of three video understanding tasks that are more challenging than existing benchmarks: Vript-HAL is the first benchmark evaluating action and object hallucinations in video LLMs, Vript-RR combines reasoning with retrieval resolving question ambiguity in long-video QAs, and Vript-ERO is a new task to evaluate the temporal understanding of events in long videos rather than actions in short videos in previous works. All code, models, and datasets are available in https://github.com/mutonix/Vript.
Vript: A Video Is Worth Thousands of Words
[ "Dongjie Yang", "Suyuan Huang", "Chengqiang Lu", "Xiaodong Han", "Haoxin Zhang", "Yan Gao", "Yao Hu", "hai zhao" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.06040
[ "https://github.com/mutonix/vript" ]
https://huggingface.co/papers/2406.06040
3
24
0
8
[]
[ "Mutonix/Vript", "Mutonix/Vript_Chinese", "Mutonix/Vript-HAL", "Mutonix/Vript-RR", "Mutonix/Vript_Multilingual", "Mutonix/Vript-ERO" ]
[ "Mutonix/Vriptor-stllm" ]
[]
[ "Mutonix/Vript", "Mutonix/Vript_Chinese", "Mutonix/Vript-HAL", "Mutonix/Vript-RR", "Mutonix/Vript_Multilingual", "Mutonix/Vript-ERO" ]
[ "Mutonix/Vriptor-stllm" ]
1
null
https://openreview.net/forum?id=2WbuKAfOxP
@inproceedings{ enevoldsen2024the, title={The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding}, author={Kenneth Enevoldsen and M{\'a}rton Kardos and Niklas Muennighoff and Kristoffer Nielbo}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=2WbuKAfOxP} }
The evaluation of English text embeddings has transitioned from evaluating a handful of datasets to broad coverage across many tasks through benchmarks such as MTEB. However, this is not the case for multilingual text embeddings due to a lack of available benchmarks. To address this problem, we introduce the Scandinavian Embedding Benchmark (SEB). SEB is a comprehensive framework that enables text embedding evaluation for Scandinavian languages across 24 tasks, 10 subtasks, and 4 task categories. Building on SEB, we evaluate more than 26 models, uncovering significant performance disparities between public and commercial solutions not previously captured by MTEB. We open-source SEB and integrate it with MTEB, thus bridging the text embedding evaluation gap for Scandinavian languages.
The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding
[ "Kenneth Enevoldsen", "Márton Kardos", "Niklas Muennighoff", "Kristoffer Nielbo" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.02396
[ "https://github.com/embeddings-benchmark/mteb" ]
https://huggingface.co/papers/2406.02396
3
0
0
4
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=1s8l1tnTXW
@inproceedings{ saeed2024muharaf, title={Muharaf: Manuscripts of Handwritten Arabic Dataset for Cursive Text Recognition}, author={Mehreen Saeed and Adrian Chan and Anupam Mijar and joseph Moukarzel and Gerges Habchi and Carlos Younes and amin elias and Chau-Wai Wong and Akram Khater}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=1s8l1tnTXW} }
We present the Manuscripts of Handwritten Arabic (Muharaf) dataset, which is a machine learning dataset consisting of more than 1,600 historic handwritten page images transcribed by experts in archival Arabic. Each document image is accompanied by spatial polygonal coordinates of its text lines as well as basic page elements. This dataset was compiled to advance the state of the art in handwritten text recognition (HTR), not only for Arabic manuscripts but also for cursive text in general. The Muharaf dataset includes diverse handwriting styles and a wide range of document types, including personal letters, diaries, notes, poems, church records, and legal correspondences. In this paper, we describe the data acquisition pipeline, notable dataset features, and statistics. We also provide a preliminary baseline result achieved by training convolutional neural networks using this data.
Muharaf: Manuscripts of Handwritten Arabic Dataset for Cursive Text Recognition
[ "Mehreen Saeed", "Adrian Chan", "Anupam Mijar", "joseph Moukarzel", "Gerges Habchi", "Carlos Younes", "amin elias", "Chau-Wai Wong", "Akram Khater" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.09630
[ "https://github.com/mehreenmehreen/start_follow_read_arabic" ]
https://huggingface.co/papers/2406.09630
1
2
0
9
[]
[ "aamijar/muharaf-public" ]
[]
[]
[ "aamijar/muharaf-public" ]
[]
1
null
https://openreview.net/forum?id=1q3b2Z95ec
@inproceedings{ mertens2024findingemo, title={FindingEmo: An Image Dataset for Emotion Recognition in the Wild}, author={Laurent Mertens and Elahe Yargholi and Hans Op de Beeck and Jan Van den Stock and Joost Vennekens}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=1q3b2Z95ec} }
We introduce FindingEmo, a new image dataset containing annotations for 25k images, specifically tailored to Emotion Recognition. Contrary to existing datasets, it focuses on complex scenes depicting multiple people in various naturalistic, social settings, with images being annotated as a whole, thereby going beyond the traditional focus on faces or single individuals. Annotated dimensions include Valence, Arousal and Emotion label, with annotations gathered using Prolific. Together with the annotations, we release the list of URLs pointing to the original images, as well as all associated source code.
FindingEmo: An Image Dataset for Emotion Recognition in the Wild
[ "Laurent Mertens", "Elahe Yargholi", "Hans Op de Beeck", "Jan Van den Stock", "Joost Vennekens" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2402.01355
[ "" ]
https://huggingface.co/papers/2402.01355
0
1
0
5
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=1nqfIQIQBf
@inproceedings{ yukhymenko2024a, title={A Synthetic Dataset for Personal Attribute Inference}, author={Hanna Yukhymenko and Robin Staab and Mark Vero and Martin Vechev}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=1nqfIQIQBf} }
Recently powerful Large Language Models (LLMs) have become easily accessible to hundreds of millions of users world-wide. However, their strong capabilities and vast world knowledge do not come without associated privacy risks. In this work, we focus on the emerging privacy threat LLMs pose – the ability to accurately infer personal information from online texts. Despite the growing importance of LLM-based author profiling, research in this area has been hampered by a lack of suitable public datasets, largely due to ethical and privacy concerns associated with real personal data. We take two steps to address this problem: (i) we construct a simulation framework for the popular social media platform Reddit using LLM agents seeded with synthetic personal profiles; (ii) using this framework, we generate *SynthPAI*, a diverse synthetic dataset of over 7800 comments manually labeled for personal attributes. We validate our dataset with a human study showing that humans barely outperform random guessing on the task of distinguishing our synthetic comments from real ones. Further, we verify that our dataset enables meaningful personal attribute inference research by showing across 18 state-of-the-art LLMs that our synthetic comments allow us to draw the same conclusions as real-world data. Combined, our experimental results, dataset and pipeline form a strong basis for future privacy-preserving research geared towards understanding and mitigating inference-based privacy threats that LLMs pose.
A Synthetic Dataset for Personal Attribute Inference
[ "Hanna Yukhymenko", "Robin Staab", "Mark Vero", "Martin Vechev" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.07217
[ "https://github.com/eth-sri/synthpai" ]
https://huggingface.co/papers/2406.07217
1
1
0
4
[]
[ "RobinSta/SynthPAI" ]
[ "hannayukhymenko/synthpai_inference" ]
[]
[ "RobinSta/SynthPAI" ]
[ "hannayukhymenko/synthpai_inference" ]
1
null
https://openreview.net/forum?id=1FVe59t3LX
@inproceedings{ zhang2024dtgb, title={{DTGB}: A Comprehensive Benchmark for Dynamic Text-Attributed Graphs}, author={Jiasheng Zhang and Jialin Chen and Menglin Yang and Aosong Feng and Shuang Liang and Jie Shao and Rex Ying}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=1FVe59t3LX} }
Dynamic text-attributed graphs (DyTAGs) are prevalent in various real-world scenarios, where each node and edge are associated with text descriptions, and both the graph structure and text descriptions evolve over time. Despite their broad applicability, there is a notable scarcity of benchmark datasets tailored to DyTAGs, which hinders potential advancement in many research fields. To address this gap, we introduce the Dynamic Text-attributed Graph Benchmark (DTGB), a collection of large-scale, time-evolving graphs from diverse domains, with nodes and edges enriched by dynamically changing text attributes and categories. To facilitate the use of DTGB, we design standardized evaluation procedures based on four real-world use cases: future link prediction, destination node retrieval, edge classification, and textual relation generation. These tasks require models to understand both dynamic graph structures and natural language, highlighting the unique challenges posed by DyTAGs. Moreover, we conduct extensive benchmark experiments on DTGB, evaluating 7 popular dynamic graph learning algorithms and their variants adapted to text attributes via LLM embeddings, along with 6 powerful large language models (LLMs). Our results show the limitations of existing models in handling DyTAGs. Our analysis also demonstrates the utility of DTGB in investigating the incorporation of structural and textual dynamics. The proposed DTGB fosters research on DyTAGs and their broad applications. It offers a comprehensive benchmark for evaluating and advancing models to handle the interplay between dynamic graph structures and natural language. The dataset and source code are available at https://github.com/zjs123/DTGB.
DTGB: A Comprehensive Benchmark for Dynamic Text-Attributed Graphs
[ "Jiasheng Zhang", "Jialin Chen", "Menglin Yang", "Aosong Feng", "Shuang Liang", "Jie Shao", "Rex Ying" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.12072
[ "https://github.com/zjs123/DTGB" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=15PS30UOUp
@inproceedings{ bountos2024kuro, title={Kuro Siwo: 33 billion \$m{\textasciicircum}2\$ under the water. A global multi-temporal satellite dataset for rapid flood mapping}, author={Nikolaos Ioannis Bountos and Maria Sdraka and Angelos Zavras and Andreas Karavias and Ilektra Karasante and Themistocles Herekakis and Angeliki Thanasou and Dimitrios Michail and Ioannis Papoutsis}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=15PS30UOUp} }
Global flash floods, exacerbated by climate change, pose severe threats to human life, infrastructure, and the environment. Recent catastrophic events in Pakistan and New Zealand underscore the urgent need for precise flood mapping to guide restoration efforts, understand vulnerabilities, and prepare for future occurrences. While Synthetic Aperture Radar (SAR) remote sensing offers day-and-night, all-weather imaging capabilities, its application in deep learning for flood segmentation is limited by the lack of large annotated datasets. To address this, we introduce Kuro Siwo, a manually annotated multi-temporal dataset, spanning 43 flood events globally. Our dataset maps more than 338 billion $m^2$ of land, with 33 billion designated as either flooded areas or permanent water bodies. Kuro Siwo includes a highly processed product optimized for flash flood mapping based on SAR Ground Range Detected, and a primal SAR Single Look Complex product with minimal preprocessing, designed to promote research on the exploitation of both the phase and amplitude information and to offer maximum flexibility for downstream task preprocessing. To leverage advances in large scale self-supervised pretraining methods for remote sensing data, we augment Kuro Siwo with a large unlabeled set of SAR samples. Finally, we provide an extensive benchmark, namely BlackBench, offering strong baselines for a diverse set of flood events globally. All data and code are published in our Github repository: https://github.com/Orion-AI-Lab/KuroSiwo.
Kuro Siwo: 33 billion m^2 under the water. A global multi-temporal satellite dataset for rapid flood mapping
[ "Nikolaos Ioannis Bountos", "Maria Sdraka", "Angelos Zavras", "Andreas Karavias", "Ilektra Karasante", "Themistocles Herekakis", "Angeliki Thanasou", "Dimitrios Michail", "Ioannis Papoutsis" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "https://github.com/Orion-AI-Lab/KuroSiwo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=0mRouJElbZ
@inproceedings{ qiu2024progressgym, title={ProgressGym: Alignment with a Millennium of Moral Progress}, author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=0mRouJElbZ} }
Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale. We introduce **progress alignment** as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots. To empower research in progress alignment, we introduce [**ProgressGym**](https://github.com/PKU-Alignment/ProgressGym), an experimental framework allowing the learning of moral progress mechanics from history, in order to facilitate future progress in real-world moral decisions. Leveraging 9 centuries of historical text and 18 [historical LLMs](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa), ProgressGym enables codification of real-world progress alignment challenges into concrete benchmarks. Specifically, we introduce three core challenges: tracking evolving values (PG-Follow), preemptively anticipating moral progress (PG-Predict), and regulating the feedback loop between human and AI value shifts (PG-Coevolve). Alignment methods without a temporal dimension are inapplicable to these tasks. In response, we present *lifelong* and *extrapolative* algorithms as baseline methods of progress alignment, and build an [open leaderboard](https://huggingface.co/spaces/PKU-Alignment/ProgressGym-LeaderBoard) soliciting novel algorithms and challenges.
ProgressGym: Alignment with a Millennium of Moral Progress
[ "Tianyi Qiu", "Yang Zhang", "Xuchuan Huang", "Jasmine Xinze Li", "Jiaming Ji", "Yaodong Yang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.20087
[ "https://github.com/pku-alignment/progressgym" ]
https://huggingface.co/papers/2406.20087
2
3
2
6
[ "PKU-Alignment/ProgressGym-HistLlama3-70B-C017-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C018-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C019-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C020-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C021-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-8B-C016-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-70B-C013-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-8B-C014-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C015-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C017-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C019-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C018-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C020-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C021-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C014-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C015-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C013-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C016-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-70B-C014-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-8B-C018-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C019-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C017-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C021-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C020-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-70B-C015-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C016-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C018-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-8B-C013-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-70B-C017-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C013-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C019-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C014-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C020-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C015-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C021-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C016-instruct-v0.1", "RichardErkhov/PKU-Alignment_-_ProgressGym-HistLlama3-8B-C021-instruct-v0.2-gguf" ]
[ "PKU-Alignment/ProgressGym-MoralEvals", "PKU-Alignment/ProgressGym-TimelessQA", "PKU-Alignment/ProgressGym-HistText" ]
[]
[ "PKU-Alignment/ProgressGym-HistLlama3-70B-C017-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C018-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C019-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C020-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C021-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-8B-C016-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-70B-C013-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-8B-C014-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C015-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C017-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C019-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C018-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C020-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C021-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C014-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C015-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C013-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C016-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-70B-C014-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-8B-C018-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C019-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C017-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C021-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-8B-C020-pretrain-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-70B-C015-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C016-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C018-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-8B-C013-instruct-v0.2", "PKU-Alignment/ProgressGym-HistLlama3-70B-C017-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C013-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C019-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C014-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C020-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C015-instruct-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C021-pretrain-v0.1", "PKU-Alignment/ProgressGym-HistLlama3-70B-C016-instruct-v0.1", "RichardErkhov/PKU-Alignment_-_ProgressGym-HistLlama3-8B-C021-instruct-v0.2-gguf" ]
[ "PKU-Alignment/ProgressGym-MoralEvals", "PKU-Alignment/ProgressGym-TimelessQA", "PKU-Alignment/ProgressGym-HistText" ]
[]
1
null
https://openreview.net/forum?id=0T8xRFrScB
@inproceedings{ melistas2024benchmarking, title={Benchmarking Counterfactual Image Generation}, author={Thomas Melistas and Nikos Spyrou and Nefeli Gkouti and Pedro Sanchez and Athanasios Vlontzos and Yannis Panagakis and Giorgos Papanastasiou and Sotirios A. Tsaftaris}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=0T8xRFrScB} }
Generative AI has revolutionised visual content editing, empowering users to effortlessly modify images and videos. However, not all edits are equal. To perform realistic edits in domains such as natural image or medical imaging, modifications must respect causal relationships inherent to the data generation process. Such image editing falls into the counterfactual image generation regime. Evaluating counterfactual image generation is substantially complex: not only does it lack observable ground truths, but it also requires adherence to causal constraints. Although several counterfactual image generation methods and evaluation metrics exist, a comprehensive comparison within a unified setting is lacking. We present a comparison framework to thoroughly benchmark counterfactual image generation methods. We evaluate the performance of three conditional image generation model families developed within the Structural Causal Model (SCM) framework. We incorporate several metrics that assess diverse aspects of counterfactuals, such as composition, effectiveness, minimality of interventions, and image realism. We integrate all models that have been used for the task at hand and expand them to novel datasets and causal graphs, demonstrating the superiority of Hierarchical VAEs across most datasets and metrics. Our framework is implemented in a user-friendly Python package that can be extended to incorporate additional SCMs, causal methods, generative models, and datasets for the community to build on. Code: https://github.com/gulnazaki/counterfactual-benchmark.
Benchmarking Counterfactual Image Generation
[ "Thomas Melistas", "Nikos Spyrou", "Nefeli Gkouti", "Pedro Sanchez", "Athanasios Vlontzos", "Yannis Panagakis", "Giorgos Papanastasiou", "Sotirios A. Tsaftaris" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2403.20287
[ "https://github.com/gulnazaki/counterfactual-benchmark" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=0NQzQVu9tY
@inproceedings{ ortiz2024dmcvb, title={{DMC}-{VB}: A Benchmark for Representation Learning for Control with Visual Distractors}, author={Joseph Ortiz and Antoine Dedieu and Wolfgang Lehrach and J Swaroop Guntupalli and Carter Wendelken and Ahmad Humayun and Sivaramakrishnan Swaminathan and Guangyao Zhou and Miguel Lazaro-Gredilla and Kevin Patrick Murphy}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=0NQzQVu9tY} }
Learning from previously collected data via behavioral cloning or offline reinforcement learning (RL) is a powerful recipe for scaling generalist agents by avoiding the need for expensive online learning. Despite strong generalization in some respects, agents are often remarkably brittle to minor visual variations in control-irrelevant factors such as the background or camera viewpoint. In this paper, we present the DeepMind Control Visual Benchmark (DMC-VB), a dataset collected in the DeepMind Control Suite to evaluate the robustness of offline RL agents for solving continuous control tasks from visual input in the presence of visual distractors. In contrast to prior works, our dataset (a) combines locomotion and navigation tasks of varying difficulties, (b) includes static and dynamic visual variations, (c) considers data generated by policies with different skill levels, (d) systematically returns pairs of state and pixel observation, (e) is an order of magnitude larger, and (f) includes tasks with hidden goals. Accompanying our dataset, we propose three benchmarks to evaluate representation learning methods for pretraining, and carry out experiments on several recently proposed methods. First, we find that pretrained representations do not help policy learning on DMC-VB, and we highlight a large representation gap between policies learned on pixel observations and on states. Second, we demonstrate that when expert data is limited, policy learning can benefit from representations pretrained on (a) suboptimal data and (b) tasks with stochastic hidden goals. Our dataset and benchmark code to train and evaluate agents are available at https://github.com/google-deepmind/dmc_vision_benchmark.
DMC-VB: A Benchmark for Representation Learning for Control with Visual Distractors
[ "Joseph Ortiz", "Antoine Dedieu", "Wolfgang Lehrach", "J Swaroop Guntupalli", "Carter Wendelken", "Ahmad Humayun", "Sivaramakrishnan Swaminathan", "Guangyao Zhou", "Miguel Lazaro-Gredilla", "Kevin Patrick Murphy" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2409.18330
[ "https://github.com/google-deepmind/dmc_vision_benchmark" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=0Gmi8TkUC7
@inproceedings{ jiang2024genai, title={Gen{AI} Arena: An Open Evaluation Platform for Generative Models}, author={Dongfu Jiang and Max Ku and Tianle Li and Yuansheng Ni and Shizhuo Sun and Rongqi Fan and Wenhu Chen}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=0Gmi8TkUC7} }
Generative AI has made remarkable strides in revolutionizing fields such as image and video generation. These advancements are driven by innovative algorithms, architectures, and data. However, the rapid proliferation of generative models has highlighted a critical gap: the absence of trustworthy evaluation metrics. Current automatic assessments such as FID, CLIP, and FVD often fail to capture the nuanced quality and user satisfaction associated with generative outputs. This paper proposes GenAI-Arena, an open platform to evaluate different image and video generative models, where users can actively participate in evaluating these models. By leveraging collective user feedback and votes, GenAI-Arena aims to provide a more democratic and accurate measure of model performance. It covers three tasks: text-to-image generation, text-to-video generation, and image editing. Currently, we cover a total of 35 open-source generative models. GenAI-Arena has been operating for seven months, amassing over 9000 votes from the community. We describe our platform, analyze the data, and explain the statistical methods for ranking the models. To further promote research on building model-based evaluation metrics, we release a cleaned version of our preference data for the three tasks, namely GenAI-Bench. We prompt existing multi-modal models such as Gemini and GPT-4o to mimic human voting. We compute accuracy by comparing the model voting with the human voting to understand their judging abilities. Our results show that existing multimodal models still lag in assessing generated visual content; even the best model, GPT-4o, only achieves an average accuracy of $49.19\%$ across the three generative tasks. Open-source MLLMs perform even worse due to the lack of instruction-following and reasoning ability in complex vision scenarios.
GenAI Arena: An Open Evaluation Platform for Generative Models
[ "Dongfu Jiang", "Max Ku", "Tianle Li", "Yuansheng Ni", "Shizhuo Sun", "Rongqi Fan", "Wenhu Chen" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.04485
[ "" ]
https://huggingface.co/papers/2406.04485
6
20
0
7
[]
[ "TIGER-Lab/GenAI-Bench" ]
[ "TIGER-Lab/GenAI-Arena" ]
[]
[ "TIGER-Lab/GenAI-Bench" ]
[ "TIGER-Lab/GenAI-Arena" ]
1
null
https://openreview.net/forum?id=0G8AXwtmy2
@inproceedings{ yeh2024tvs, title={T2Vs Meet {VLM}s: A Scalable Multimodal Dataset for Visual Harmfulness Recognition}, author={Chen Yeh and You-Ming Chang and Wei-Chen Chiu and Ning Yu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=0G8AXwtmy2} }
While widespread access to the Internet and the rapid advancement of generative models boost people's creativity and productivity, the risk of encountering inappropriate or harmful content also increases. To address this issue, researchers have combined several harmful content datasets with machine learning methods to detect harmful concepts. However, existing harmful datasets are curated around a narrow range of harmful objects and cover only real harmful content sources. This restricts the generalizability of methods based on such datasets and can lead to misjudgments in certain cases. We therefore propose a comprehensive and extensive harmful dataset, **VHD11K**, consisting of 10,000 images and 1,000 videos, crawled from the Internet and generated by 4 generative models, spanning a total of 10 harmful categories that cover a full spectrum of harmful concepts with non-trivial definitions. We also propose a novel annotation framework that formulates the annotation process as a multi-agent Visual Question Answering (VQA) task, having 3 different VLMs "debate" whether the given image/video is harmful, and incorporating an in-context learning strategy into the debating process. This ensures that the VLMs consider the context of the given image/video and both sides of the argument thoroughly before making decisions, further reducing the likelihood of misjudgments in edge cases. Evaluation and experimental results demonstrate that (1) the strong alignment between annotations from our framework and those from human annotators ensures the reliability of VHD11K; (2) our full-spectrum harmful dataset exposes the inability of existing harmful content detection methods to detect a wide range of harmful content and improves the performance of existing harmfulness recognition methods; and (3) our dataset outperforms the baseline dataset, SMID, as evidenced by the greater improvement it yields in harmfulness recognition methods. The entire dataset is publicly available: https://huggingface.co/datasets/denny3388/VHD11K
T2Vs Meet VLMs: A Scalable Multimodal Dataset for Visual Harmfulness Recognition
[ "Chen Yeh", "You-Ming Chang", "Wei-Chen Chiu", "Ning Yu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2409.19734
[ "https://github.com/nctu-eva-lab/vhd11k" ]
https://huggingface.co/papers/2409.19734
1
0
0
4
[]
[ "denny3388/VHD11K" ]
[]
[]
[ "denny3388/VHD11K" ]
[]
1
null
https://openreview.net/forum?id=0G5OK5vmmg
@inproceedings{ cao2024wenmind, title={WenMind: A Comprehensive Benchmark for Evaluating Large Language Models in Chinese Classical Literature and Language Arts}, author={Jiahuan Cao and Yang Liu and Yongxin Shi and Kai Ding and Lianwen Jin}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=0G5OK5vmmg} }
Large Language Models (LLMs) have made significant advancements across numerous domains, but their capabilities in Chinese Classical Literature and Language Arts (CCLLA) remain largely unexplored due to the limited scope and tasks of existing benchmarks. To fill this gap, we propose WenMind, a comprehensive benchmark dedicated to evaluating LLMs in CCLLA. WenMind covers the sub-domains of Ancient Prose, Ancient Poetry, and Ancient Literary Culture, comprising 4,875 question-answer pairs, spanning 42 fine-grained tasks, 3 question formats, and 2 evaluation scenarios: domain-oriented and capability-oriented. Based on WenMind, we conduct a thorough evaluation of 31 representative LLMs, including general-purpose models and ancient Chinese LLMs. The results reveal that even the best-performing model, ERNIE-4.0, only achieves a total score of 64.3, indicating significant room for improvement of LLMs in the CCLLA domain. We also provide insights into the strengths and weaknesses of different LLMs and highlight the importance of pre-training data in achieving better results. Overall, WenMind serves as a standardized and comprehensive baseline, providing valuable insights for future CCLLA research. Our benchmark and related code are available at \url{https://github.com/SCUT-DLVCLab/WenMind}.
WenMind: A Comprehensive Benchmark for Evaluating Large Language Models in Chinese Classical Literature and Language Arts
[ "Jiahuan Cao", "Yang Liu", "Yongxin Shi", "Kai Ding", "Lianwen Jin" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=01lhHg8H9p
@inproceedings{ li2024glbench, title={{GLB}ench: A Comprehensive Benchmark for Graph with Large Language Models}, author={Yuhan Li and Peisong Wang and Xiao Zhu and Aochuan Chen and Haiyun Jiang and Deng Cai and Victor Wai Kin Chan and Jia Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=01lhHg8H9p} }
The emergence of large language models (LLMs) has revolutionized the way we interact with graphs, leading to a new paradigm called GraphLLM. Despite the rapid development of GraphLLM methods in recent years, the progress and understanding of this field remain unclear due to the lack of a benchmark with consistent experimental protocols. To bridge this gap, we introduce GLBench, the first comprehensive benchmark for evaluating GraphLLM methods in both supervised and zero-shot scenarios. GLBench provides a fair and thorough evaluation of different categories of GraphLLM methods, along with traditional baselines such as graph neural networks. Through extensive experiments on a collection of real-world datasets with consistent data processing and splitting strategies, we have uncovered several key findings. Firstly, GraphLLM methods outperform traditional baselines in supervised settings, with LLM-as-enhancers showing the most robust performance. However, using LLMs as predictors is less effective and often leads to uncontrollable output issues. We also notice that no clear scaling laws exist for current GraphLLM methods. In addition, both structures and semantics are crucial for effective zero-shot transfer, and our proposed simple baseline can even outperform several models tailored for zero-shot scenarios. The data and code of the benchmark can be found at https://github.com/NineAbyss/GLBench.
GLBench: A Comprehensive Benchmark for Graph with Large Language Models
[ "Yuhan Li", "Peisong Wang", "Xiao Zhu", "Aochuan Chen", "Haiyun Jiang", "Deng Cai", "Victor Wai Kin Chan", "Jia Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.07457
[ "https://github.com/nineabyss/glbench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=00Sx577BT3
@inproceedings{ ohana2024the, title={The Well: a Large-Scale Collection of Diverse Physics Simulations for Machine Learning}, author={Ruben Ohana and Michael McCabe and Lucas Thibaut Meyer and Rudy Morel and Fruzsina Julia Agocs and Miguel Beneitez and Marsha Berger and Blakesley Burkhart and Stuart B. Dalziel and Drummond Buschman Fielding and Daniel Fortunato and Jared A. Goldberg and Keiya Hirashima and Yan-Fei Jiang and Rich Kerswell and Suryanarayana Maddu and Jonah M. Miller and Payel Mukhopadhyay and Stefan S. Nixon and Jeff Shen and Romain Watteaux and Bruno R{\'e}galdo-Saint Blancard and Fran{\c{c}}ois Rozet and Liam Holden Parker and Miles Cranmer and Shirley Ho}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=00Sx577BT3} }
Machine learning-based surrogate models offer researchers powerful tools for accelerating simulation-based workflows. However, as standard datasets in this space often cover small classes of physical behavior, it can be difficult to evaluate the efficacy of new approaches. To address this gap, we introduce the Well: a large-scale collection of datasets containing numerical simulations of a wide variety of spatiotemporal physical systems. The Well draws from domain experts and numerical software developers to provide 15TB of data across 16 datasets covering diverse domains such as biological systems, fluid dynamics, acoustic scattering, as well as magneto-hydrodynamic simulations of extra-galactic fluids or supernova explosions. These datasets can be used individually or as part of a broader benchmark suite. To facilitate usage of the Well, we provide a unified PyTorch interface for training and evaluating models. We demonstrate the function of this library by introducing example baselines that highlight the new challenges posed by the complex dynamics of the Well. The code and data are available at https://github.com/PolymathicAI/the_well.
The Well: a Large-Scale Collection of Diverse Physics Simulations for Machine Learning
[ "Ruben Ohana", "Michael McCabe", "Lucas Thibaut Meyer", "Rudy Morel", "Fruzsina Julia Agocs", "Miguel Beneitez", "Marsha Berger", "Blakesley Burkhart", "Stuart B. Dalziel", "Drummond Buschman Fielding", "Daniel Fortunato", "Jared A. Goldberg", "Keiya Hirashima", "Yan-Fei Jiang", "Rich Kerswell", "Suryanarayana Maddu", "Jonah M. Miller", "Payel Mukhopadhyay", "Stefan S. Nixon", "Jeff Shen", "Romain Watteaux", "Bruno Régaldo-Saint Blancard", "François Rozet", "Liam Holden Parker", "Miles Cranmer", "Shirley Ho" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0