# XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models

[![Paper](https://img.shields.io/badge/Paper-arXiv-red.svg)](https://arxiv.org/abs/2510.15148)
[![Website](https://img.shields.io/badge/Website-XModBench-green.svg)](https://xingruiwang.github.io/projects/XModBench/)
[![Dataset](https://img.shields.io/badge/Dataset-XModBench-ffcc4d?logo=huggingface&logoColor=black)](https://huggingface.co/datasets/RyanWW/XModBench)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

XModBench is a comprehensive benchmark designed to evaluate the cross-modal capabilities and consistency of omni-language models. It systematically assesses model performance across multiple modalities (text, vision, audio) and various cognitive tasks, revealing critical gaps in current state-of-the-art models.

### Key Features

- **🎯 Multi-Modal Evaluation**: Comprehensive testing across text, vision, and audio modalities
- **🧩 5 Task Dimensions**: Perception, Spatial, Temporal, Linguistic, and Knowledge tasks
- **📊 13 SOTA Models Evaluated**: Including Gemini 2.5 Pro, Qwen2.5-Omni, EchoInk-R1, and more
- **🔄 Consistency Analysis**: Measures performance stability across different modal configurations
- **👥 Human Performance Baseline**: Establishes human-level benchmarks for comparison

## 🚀 Quick Start

### Installation

```bash
# Clone the repository
git clone https://github.com/XingruiWang/XModBench.git
cd XModBench

# Install dependencies
pip install -r requirements.txt
```

## 📂 Dataset Structure

```
XModBench/
├── data/
│   ├── text/
│   │   ├── perception/
│   │   ├── spatial/
│   │   ├── temporal/
│   │   ├── linguistic/
│   │   └── knowledge/
│   ├── vision/
│   │   └── [same task categories]
│   └── audio/
│       └── [same task categories]
├── models/
│   └── evaluation_scripts/
├── results/
│   └── model_performances/
└── analysis/
    └── visualization/
```
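
The benchmark data is also hosted on the Hugging Face Hub (see the dataset badge above). The snippet below is a minimal sketch, not the project's official loader: it assumes the Hub repository mirrors the directory tree shown above and simply downloads a local copy with `huggingface_hub`.

```python
# Minimal sketch (not the official loader): fetch the XModBench files from the
# Hugging Face Hub and list the top-level folders. Assumes the Hub repository
# mirrors the directory tree shown above.
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="RyanWW/XModBench", repo_type="dataset")

for entry in sorted(Path(local_dir).iterdir()):
    print(entry.name)
```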

### Basic Usage

Evaluations are launched through `scripts/run.py`, which takes a model name (`--model`), a task name (`--task_name`), and a sample count (`--sample`). An example SLURM job script:

```bash
#!/bin/bash
#SBATCH --job-name=VLM_eval
#SBATCH --output=log/job_%j.out
#SBATCH --error=log/job_%j.log
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=4

echo "Running on host: $(hostname)"
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"

module load conda
conda activate omni

# Point this at your local checkout of the evaluation code
export audioBench='/home/xwang378/scratch/2025/AudioBench'

# Other evaluated combinations follow the same pattern, e.g. --model gemini
# with --task_name perception/vggss_audio_vision, perception/vggss_vision_audio,
# perception/vggss_vision_text, or perception/vggss_audio_text.

# Qwen2.5-Omni
python $audioBench/scripts/run.py \
    --model qwen2.5_omni \
    --task_name perception/vggss_vision_text \
    --sample 1000
```

## 📈 Benchmark Results

### Overall Performance Comparison

| Model | Perception | Spatial | Temporal | Linguistic | Knowledge | Average |
|-------|------------|---------|----------|------------|-----------|---------|
| **Gemini 2.5 Pro** | 75.9% | 50.1% | 60.8% | 76.8% | 89.3% | 70.6% |
| **Human Performance** | 91.0% | 89.7% | 88.9% | 93.9% | 93.9% | 91.5% |

### Key Findings

#### 1️⃣ Task Competence Gaps
- **Strong Performance**: Perception and linguistic tasks (~75% for the best models)
- **Weak Performance**: Spatial (50.1%) and temporal reasoning (60.8%)
- **Performance Drop**: 15-25 point decrease on spatial/temporal tasks vs. perception tasks

#### 2️⃣ Modality Disparity
- **Audio vs. Text**: 20-49 point performance drop
- **Audio vs. Vision**: 33-point average gap
- **Vision vs. Text**: ~15-point disparity
- **Consistency**: Best models show a 10-12 point standard deviation across modal configurations (see the sketch below)

#### 3️⃣ Directional Imbalance
- **Vision↔Text**: 9-17 point gaps between directions
- **Audio↔Text**: 6-8 point asymmetries
- **Root Cause**: Training data imbalance favoring image-to-text over the inverse direction
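
The consistency figures above refer to the spread of a model's accuracy across modal configurations of the same task. The snippet below is only an illustrative sketch of that computation; the scores are placeholders, not actual XModBench results.

```python
# Illustrative sketch: cross-modal consistency as the standard deviation of
# accuracy across modal configurations of the same task.
# The scores below are placeholders, not actual XModBench results.
from statistics import mean, pstdev

accuracy_by_config = {
    "audio -> text": 58.0,
    "vision -> text": 74.0,
    "text -> audio": 52.0,
    "text -> vision": 69.0,
}

scores = list(accuracy_by_config.values())
print(f"Mean accuracy: {mean(scores):.1f}")
print(f"Consistency (std dev): {pstdev(scores):.1f}")
```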

## 📝 Citation

If you use XModBench in your research, please cite our paper:

```bibtex
@article{wang2025xmodbench,
  title={XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models},
  author={Wang, Xingrui and others},
  journal={arXiv preprint arXiv:2510.15148},
  year={2025}
}
```

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

We thank all contributors and the research community for their valuable feedback and suggestions.

## 📧 Contact

- **Project Lead**: Xingrui Wang
- **Email**: [xingrui.wang@example.edu]
- **Website**: [https://xingruiwang.github.io/projects/XModBench/](https://xingruiwang.github.io/projects/XModBench/)

## 🔗 Links

- [Project Website](https://xingruiwang.github.io/projects/XModBench/)
- [Paper](https://arxiv.org/abs/2510.15148)
- [Leaderboard](https://xingruiwang.github.io/projects/XModBench/leaderboard)
- [Documentation](https://xingruiwang.github.io/projects/XModBench/docs)

## Todo

- [ ] Release Hugging Face data
- [x] Release data processing code
- [x] Release data evaluation code

---

**Note**: XModBench is actively maintained and regularly updated with new models and evaluation metrics. For the latest updates, please check our [releases](https://github.com/XingruiWang/XModBench/releases) page.