RyanWW committed
Commit 927370a · verified · 1 Parent(s): 484ba9b

Update README.md

Files changed (1)
  1. README.md +42 -7
README.md CHANGED
@@ -1,9 +1,45 @@
- # XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models

- [![Paper](https://img.shields.io/badge/Paper-arXiv-red.svg)](https://arxiv.org/abs/2510.15148)
- [![Website](https://img.shields.io/badge/Website-XModBench-green.svg)](https://xingruiwang.github.io/projects/XModBench/)
- [![Dataset](https://img.shields.io/badge/Dataset-XModBench-ffcc4d?logo=huggingface&logoColor=black)](https://huggingface.co/datasets/RyanWW/XModBench)
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

  XModBench is a comprehensive benchmark designed to evaluate the cross-modal capabilities and consistency of omni-language models. It systematically assesses model performance across multiple modalities (text, vision, audio) and various cognitive tasks, revealing critical gaps in current state-of-the-art models.
@@ -183,5 +219,4 @@ We thank all contributors and the research community for their valuable feedback
  - [x] Release data evaluation code
  ---

- **Note**: XModBench is actively maintained and regularly updated with new models and evaluation metrics. For the latest updates, please check our [releases](https://github.com/XingruiWang/XModBench/releases) page.
-
 
+ ---
+ license: apache-2.0
+ task_categories:
+ - multiple-choice
+ language:
+ - en
+ - zh
+ tags:
+ - audio-visual
+ - omnimodality
+ - multi-modality
+ - benchmark
+ pretty_name: 'XModBench '
+ size_categories:
+ - 10K<n<100K
+ ---

+ <h1 align="center">
+ XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models
+ </h1>
+
+ <p align="center">
+ <img src="https://xingruiwang.github.io/projects/XModBench/static/images/teaser.png" width="90%" alt="XModBench teaser">
+ </p>
+
+ <p align="center">
+ <a href="https://arxiv.org/abs/2510.15148">
+ <img src="https://img.shields.io/badge/Paper-arXiv-red.svg" alt="Paper">
+ </a>
+ <a href="https://xingruiwang.github.io/projects/XModBench/">
+ <img src="https://img.shields.io/badge/Website-XModBench-0a7aca?logo=globe&logoColor=white" alt="Website">
+ </a>
+ <a href="https://huggingface.co/datasets/RyanWW/XModBench">
+ <img src="https://img.shields.io/badge/Dataset-XModBench-FFD21E?logo=huggingface" alt="Dataset">
+ </a>
+ <a href="https://github.com/XingruiWang/XModBench">
+ <img src="https://img.shields.io/badge/Code-XModBench-181717?logo=github&logoColor=white" alt="GitHub Repo">
+ </a>
+ <a href="https://opensource.org/licenses/MIT">
+ <img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT">
+ </a>
+ </p>

  XModBench is a comprehensive benchmark designed to evaluate the cross-modal capabilities and consistency of omni-language models. It systematically assesses model performance across multiple modalities (text, vision, audio) and various cognitive tasks, revealing critical gaps in current state-of-the-art models.
 
  - [x] Release data evaluation code
  ---

+ **Note**: XModBench is actively maintained and regularly updated with new models and evaluation metrics. For the latest updates, please check our [releases](https://github.com/XingruiWang/XModBench/releases) page.
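
The metadata block added in this commit registers the README as a Hugging Face dataset card, so the data can be pulled programmatically. Below is a minimal sketch of loading it with the `datasets` library, assuming the repository exposes a default configuration; the configuration, split, and field names are assumptions and are not taken from this commit.

```python
# Minimal sketch: load XModBench from the Hugging Face Hub.
# Assumes `pip install datasets` and that the dataset loads with its default
# configuration; actual config/split/field names may differ.
from datasets import load_dataset

ds = load_dataset("RyanWW/XModBench")

print(ds)                     # show available splits and their sizes
first_split = next(iter(ds))  # pick whichever split exists, e.g. "train"
print(ds[first_split][0])     # inspect one example (a multiple-choice item)
```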