---
license: apache-2.0
task_categories:
  - multiple-choice
language:
  - en
  - zh
tags:
  - audio-visual
  - omnimodality
  - multi-modality
  - benchmark
pretty_name: XModBench
size_categories:
  - 10K<n<100K
---

XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models

XModBench teaser

Paper | Website | Dataset | GitHub Repo | License: MIT

XModBench is a comprehensive benchmark designed to evaluate the cross-modal capabilities and consistency of omni-language models. It systematically assesses model performance across multiple modalities (text, vision, audio) and various cognitive tasks, revealing critical gaps in current state-of-the-art models.

Key Features

  • 🎯 Multi-Modal Evaluation: Comprehensive testing across text, vision, and audio modalities
  • 🧩 5 Task Dimensions: Perception, Spatial, Temporal, Linguistic, and Knowledge tasks
  • 📊 13 SOTA Models Evaluated: Including Gemini 2.5 Pro, Qwen2.5-Omni, EchoInk-R1, and more
  • 🔄 Consistency Analysis: Measures performance stability across different modal configurations
  • 👥 Human Performance Baseline: Establishes human-level benchmarks for comparison

🚀 Quick Start

Installation

# Clone the repository
git clone https://github.com/XingruiWang/XModBench.git
cd XModBench

# Install dependencies
pip install -r requirements.txt
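
The evaluation script shown later under Basic Usage activates a conda environment named omni. If you prefer to set that up ahead of time, a minimal sketch is below; the environment name comes from that script, while the Python version is an assumption.

# Optional: create the conda environment activated by the evaluation script under Basic Usage.
# "omni" matches that script; Python 3.10 is an assumption, adjust as needed.
conda create -n omni python=3.10 -y
conda activate omni
pip install -r requirements.txt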

📂 Dataset Structure

Download and Setup

After cloning from Hugging Face, you'll need to extract the data:

# Download the dataset from HuggingFace
git clone https://huggingface.co/datasets/RyanWW/XModBench

cd XModBench

# Extract the Data.zip file
unzip Data.zip

# Now you have the following structure:

Directory Structure

XModBench/
β”œβ”€β”€ Data/                              # Unzipped from Data.zip
β”‚   β”œβ”€β”€ landscape_audiobench/          # Nature sound scenes
β”‚   β”œβ”€β”€ emotions/                      # Emotion classification data
β”‚   β”œβ”€β”€ solos_processed/               # Musical instrument solos
β”‚   β”œβ”€β”€ gtzan-dataset-music-genre-classification/  # Music genre data
β”‚   β”œβ”€β”€ singers_data_processed/        # Singer identification
β”‚   β”œβ”€β”€ temporal_audiobench/           # Temporal reasoning tasks
β”‚   β”œβ”€β”€ urbansas_samples_videos_filtered/  # Urban 3D movements
β”‚   β”œβ”€β”€ STARSS23_processed_augmented/  # Spatial audio panorama
β”‚   β”œβ”€β”€ vggss_audio_bench/             # Fine-grained audio-visual
β”‚   β”œβ”€β”€ URMP_processed/                # Musical instrument arrangements
β”‚   β”œβ”€β”€ ExtremCountAV/                 # Counting tasks
β”‚   β”œβ”€β”€ posters/                       # Movie posters
β”‚   └── trailer_clips/                 # Movie trailers
β”‚
└── tasks/                             # Task configurations (ready to use)
    β”œβ”€β”€ 01_perception/                 # Perception tasks
    β”‚   β”œβ”€β”€ finegrained/               # Fine-grained recognition
    β”‚   β”œβ”€β”€ natures/                   # Nature scenes
    β”‚   β”œβ”€β”€ instruments/               # Musical instruments
    β”‚   β”œβ”€β”€ instruments_comp/          # Instrument compositions
    β”‚   └── general_activities/        # General activities
    β”œβ”€β”€ 02_spatial/                    # Spatial reasoning tasks
    β”‚   β”œβ”€β”€ 3D_movements/              # 3D movement tracking
    β”‚   β”œβ”€β”€ panaroma/                  # Panoramic spatial audio
    β”‚   └── arrangements/              # Spatial arrangements
    β”œβ”€β”€ 03_speech/                     # Speech and language tasks
    β”‚   β”œβ”€β”€ recognition/               # Speech recognition
    β”‚   └── translation/               # Translation
    β”œβ”€β”€ 04_temporal/                   # Temporal reasoning tasks
    β”‚   β”œβ”€β”€ count/                     # Temporal counting
    β”‚   β”œβ”€β”€ order/                     # Temporal ordering
    β”‚   └── calculation/               # Temporal calculations
    └── 05_Exteral/                    # Additional classification tasks
        β”œβ”€β”€ emotion_classification/    # Emotion recognition
        β”œβ”€β”€ music_genre_classification/ # Music genre
        β”œβ”€β”€ singer_identification/     # Singer identification
        └── movie_matching/            # Movie matching

Note: All file paths in the task JSON files use relative paths (./benchmark/Data/...), so ensure your working directory is set correctly when running evaluations.
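
A quick sanity check along these lines can catch path issues early; the ./benchmark/Data prefix is taken from the task JSONs, and the symlink workaround is only an assumption for the case where Data/ was unzipped at the repository root.

# Check that the relative prefix used in the task JSONs resolves from the current directory.
ls ./benchmark/Data > /dev/null 2>&1 && echo "Data paths resolve" || echo "Data paths not found"

# If Data/ sits at the repository root instead, one option (an assumption, adjust to your layout)
# is to expose it under the expected prefix via a symlink:
# mkdir -p benchmark && ln -s "$(pwd)/Data" benchmark/Data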

Basic Usage



#!/bin/bash
#SBATCH --job-name=VLM_eval
#SBATCH --output=log/job_%j.out
#SBATCH --error=log/job_%j.log
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=4

echo "Running on host: $(hostname)"
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"

module load conda
conda activate omni

# Path to your local copy of the evaluation code (adjust as needed).
export audioBench='/home/xwang378/scratch/2025/AudioBench'

# Swap --model (e.g. gemini, qwen2.5_omni) and --task_name to cover other models
# and modality pairings, e.g. perception/vggss_audio_vision,
# perception/vggss_vision_audio, or perception/vggss_audio_text.
python $audioBench/scripts/run.py \
    --model qwen2.5_omni \
    --task_name perception/vggss_vision_text \
    --sample 1000
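
The script above targets a SLURM cluster. How you launch it depends on your environment; two plausible options are sketched below, where the filename eval_qwen.sh is a placeholder and running without a scheduler assumes a machine with suitable GPUs.

# Submit via SLURM (the filename is a placeholder for wherever you saved the script above):
sbatch eval_qwen.sh

# Or run the same evaluation directly on a GPU machine:
conda activate omni
export audioBench=/path/to/your/clone   # directory containing scripts/run.py
python $audioBench/scripts/run.py --model qwen2.5_omni --task_name perception/vggss_vision_text --sample 1000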

📈 Benchmark Results

Overall Performance Comparison

| Model | Perception | Spatial | Temporal | Linguistic | Knowledge | Average |
|---|---|---|---|---|---|---|
| Gemini 2.5 Pro | 75.9% | 50.1% | 60.8% | 76.8% | 89.3% | 70.6% |
| Human Performance | 91.0% | 89.7% | 88.9% | 93.9% | 93.9% | 91.5% |

Key Findings

1️⃣ Task Competence Gaps

  • Strong Performance: Perception and linguistic tasks (~75% for best models)
  • Weak Performance: Spatial (50.1%) and temporal reasoning (60.8%)
  • Performance Drop: 15-25 point drop on spatial/temporal tasks compared with perception tasks

2️⃣ Modality Disparity

  • Audio vs. Text: 20-49 point performance drop
  • Audio vs. Vision: 33-point average gap
  • Vision vs. Text: ~15-point disparity
  • Consistency: Best models show 10-12 point standard deviation

3️⃣ Directional Imbalance

  • Vision↔Text: 9-17 point gaps between directions
  • Audio↔Text: 6-8 point asymmetries
  • Root Cause: Training data imbalance favoring image-to-text over inverse directions

πŸ“ Citation

If you use XModBench in your research, please cite our paper:

@article{wang2025xmodbench,
  title={XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models},
  author={Wang, Xingrui and others},
  journal={arXiv preprint arXiv:2510.15148},
  year={2025}
}

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

We thank all contributors and the research community for their valuable feedback and suggestions.

📧 Contact

🔗 Links

Todo

  • Release Huggingface data
  • Release data processing code
  • Release data evaluation code

Note: XModBench is actively maintained and regularly updated with new models and evaluation metrics. For the latest updates, please check our releases page.