---
license: apache-2.0
task_categories:
- multiple-choice
language:
- en
- zh
tags:
- audio-visual
- omnimodality
- multi-modality
- benchmark
pretty_name: XModBench
size_categories:
- 10K<n<100K
---

<h1 align="center">
XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models
</h1>

<p align="center">
<img src="https://xingruiwang.github.io/projects/XModBench/static/images/teaser.png" width="90%" alt="XModBench teaser">
</p>

<p align="center">
<a href="https://arxiv.org/abs/2510.15148">
<img src="https://img.shields.io/badge/Arxiv-Paper-b31b1b.svg" alt="Paper">
</a>
<a href="https://xingruiwang.github.io/projects/XModBench/">
<img src="https://img.shields.io/badge/Website-Page-0a7aca?logo=globe&logoColor=white" alt="Website">
</a>
<a href="https://huggingface.co/datasets/RyanWW/XModBench">
<img src="https://img.shields.io/badge/Huggingface-Dataset-FFD21E?logo=huggingface" alt="Dataset">
</a>
<a href="https://github.com/XingruiWang/XModBench">
<img src="https://img.shields.io/badge/Github-Code-181717?logo=github&logoColor=white" alt="GitHub Repo">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
</a>
</p>

XModBench is a comprehensive benchmark designed to evaluate the cross-modal capabilities and consistency of omni-language models. It systematically assesses model performance across multiple modalities (text, vision, audio) and various cognitive tasks, revealing critical gaps in current state-of-the-art models.

### Key Features

- **🎯 Multi-Modal Evaluation**: Comprehensive testing across text, vision, and audio modalities
- **🧩 5 Task Dimensions**: Perception, Spatial, Temporal, Linguistic, and Knowledge tasks
- **🏆 13 SOTA Models Evaluated**: Including Gemini 2.5 Pro, Qwen2.5-Omni, EchoInk-R1, and more
- **📊 Consistency Analysis**: Measures performance stability across different modal configurations
- **👥 Human Performance Baseline**: Establishes human-level benchmarks for comparison

## 🚀 Quick Start

### Installation

```bash
# Clone the repository
git clone https://github.com/XingruiWang/XModBench.git
cd XModBench

# Install dependencies
pip install -r requirements.txt
```

## 📁 Dataset Structure

### Download and Setup

After cloning from HuggingFace, you'll need to extract the data:

```bash
# Download the dataset from HuggingFace
git clone https://huggingface.co/datasets/RyanWW/XModBench

cd XModBench

# Extract the Data.zip file
unzip Data.zip

# Now you have the following structure:
```
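
If git-lfs is inconvenient, the dataset can also be fetched with the Hugging Face CLI. This is only a sketch of an alternative download path; the repo id is taken from the dataset link above, and `--local-dir` can be any directory you like.

```bash
# Alternative download via the Hugging Face CLI (no git-lfs required)
pip install -U "huggingface_hub[cli]"
huggingface-cli download RyanWW/XModBench --repo-type dataset --local-dir XModBench
```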

### Directory Structure

```
XModBench/
├── Data/                                           # Unzipped from Data.zip
│   ├── landscape_audiobench/                       # Nature sound scenes
│   ├── emotions/                                   # Emotion classification data
│   ├── solos_processed/                            # Musical instrument solos
│   ├── gtzan-dataset-music-genre-classification/   # Music genre data
│   ├── singers_data_processed/                     # Singer identification
│   ├── temporal_audiobench/                        # Temporal reasoning tasks
│   ├── urbansas_samples_videos_filtered/           # Urban 3D movements
│   ├── STARSS23_processed_augmented/               # Spatial audio panorama
│   ├── vggss_audio_bench/                          # Fine-grained audio-visual
│   ├── URMP_processed/                             # Musical instrument arrangements
│   ├── ExtremCountAV/                              # Counting tasks
│   ├── posters/                                    # Movie posters
│   └── trailer_clips/                              # Movie trailers
│
└── tasks/                                          # Task configurations (ready to use)
    ├── 01_perception/                              # Perception tasks
    │   ├── finegrained/                            # Fine-grained recognition
    │   ├── natures/                                # Nature scenes
    │   ├── instruments/                            # Musical instruments
    │   ├── instruments_comp/                       # Instrument compositions
    │   └── general_activities/                     # General activities
    ├── 02_spatial/                                 # Spatial reasoning tasks
    │   ├── 3D_movements/                           # 3D movement tracking
    │   ├── panaroma/                               # Panoramic spatial audio
    │   └── arrangements/                           # Spatial arrangements
    ├── 03_speech/                                  # Speech and language tasks
    │   ├── recognition/                            # Speech recognition
    │   └── translation/                            # Translation
    ├── 04_temporal/                                # Temporal reasoning tasks
    │   ├── count/                                  # Temporal counting
    │   ├── order/                                  # Temporal ordering
    │   └── calculation/                            # Temporal calculations
    └── 05_Exteral/                                 # Additional classification tasks
        ├── emotion_classification/                 # Emotion recognition
        ├── music_genre_classification/             # Music genre
        ├── singer_identification/                  # Singer identification
        └── movie_matching/                         # Movie matching
```

**Note**: All file paths in the task JSON files use relative paths (`./benchmark/Data/...`), so ensure your working directory is set correctly when running evaluations.
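
For example, a minimal sketch (assuming the layout above, and that the task JSONs store these paths as quoted strings) that exposes the extracted `Data/` folder at `./benchmark/Data` and checks that the referenced files resolve:

```bash
# Run from the directory you will launch evaluations from, with the extracted
# XModBench/Data folder available locally (adjust the source path to your setup).
mkdir -p benchmark
ln -sfn "$(pwd)/Data" benchmark/Data

# List any paths referenced by the task JSONs that do not resolve.
grep -rhoE '"\./benchmark/Data/[^"]+"' tasks/ | tr -d '"' | sort -u | \
  while read -r p; do [ -e "$p" ] || echo "Missing: $p"; done
```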

### Basic Usage

```bash
#!/bin/bash
# Example SLURM batch script; run the python command directly if you are not on a cluster.
#SBATCH --job-name=VLM_eval
#SBATCH --output=log/job_%j.out
#SBATCH --error=log/job_%j.log
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=4

echo "Running on host: $(hostname)"
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"

module load conda
conda activate omni

# Path to your local XModBench clone
export audioBench=/path/to/XModBench

# Evaluate Qwen2.5-Omni on the vision -> text VGGSS perception task.
# Other pairings of the same task: perception/vggss_audio_text,
# perception/vggss_audio_vision, perception/vggss_vision_audio.
# Use --model gemini to evaluate Gemini instead.
python "$audioBench/scripts/run.py" \
    --model qwen2.5_omni \
    --task_name perception/vggss_vision_text \
    --sample 1000
```
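
For example, the four VGGSS perception pairings referenced above can be run back to back. This is only a sketch; it assumes `$audioBench` is set as in the script, and reuses the `--model` and `--sample` values shown there.

```bash
# Evaluate one model on all four VGGSS modality pairings in sequence
for task in vggss_audio_vision vggss_vision_audio vggss_vision_text vggss_audio_text; do
  python "$audioBench/scripts/run.py" \
    --model qwen2.5_omni \
    --task_name "perception/${task}" \
    --sample 1000
done
```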

## 📊 Benchmark Results

### Overall Performance Comparison

| Model | Perception | Spatial | Temporal | Linguistic | Knowledge | Average |
|-------|------------|---------|----------|------------|-----------|---------|
| **Gemini 2.5 Pro** | 75.9% | 50.1% | 60.8% | 76.8% | 89.3% | 70.6% |
| **Human Performance** | 91.0% | 89.7% | 88.9% | 93.9% | 93.9% | 91.5% |
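
The Average column is consistent with an unweighted mean of the five task dimensions (a quick sanity check, not a statement from the paper):

```bash
# Gemini 2.5 Pro: (75.9 + 50.1 + 60.8 + 76.8 + 89.3) / 5 = 70.58, which rounds to 70.6
awk 'BEGIN { printf "%.1f\n", (75.9 + 50.1 + 60.8 + 76.8 + 89.3) / 5 }'
```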

### Key Findings

#### 1️⃣ Task Competence Gaps
- **Strong Performance**: Perception and linguistic tasks (~75% for the best models)
- **Weak Performance**: Spatial (50.1%) and temporal (60.8%) reasoning
- **Performance Drop**: 15-25 points lower on spatial/temporal tasks than on perception tasks

#### 2️⃣ Modality Disparity
- **Audio vs. Text**: 20-49 point performance drop
- **Audio vs. Vision**: 33-point average gap
- **Vision vs. Text**: ~15-point disparity
- **Consistency**: Even the best models show a 10-12 point standard deviation across modal configurations

#### 3️⃣ Directional Imbalance
- **Vision ↔ Text**: 9-17 point gaps between directions
- **Audio ↔ Text**: 6-8 point asymmetries
- **Root Cause**: Training data imbalance favoring image-to-text over the inverse direction

## 📚 Citation

If you use XModBench in your research, please cite our paper:

```bibtex
@article{wang2025xmodbench,
  title={XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models},
  author={Wang, Xingrui and others},
  journal={arXiv preprint arXiv:2510.15148},
  year={2025}
}
```

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

We thank all contributors and the research community for their valuable feedback and suggestions.

## 📧 Contact

- **Project Lead**: Xingrui Wang
- **Email**: [xwang378@jh.edu](mailto:xwang378@jh.edu)
- **Website**: [https://xingruiwang.github.io/projects/XModBench/](https://xingruiwang.github.io/projects/XModBench/)

## 🔗 Links

- [Project Website](https://xingruiwang.github.io/projects/XModBench/)
- [Paper](https://arxiv.org/abs/2510.15148)
- [Leaderboard](https://xingruiwang.github.io/projects/XModBench/leaderboard)
- [Documentation](https://xingruiwang.github.io/projects/XModBench/docs)

## Todo

- [ ] Release Huggingface data
- [x] Release data processing code
- [x] Release data evaluation code

---

**Note**: XModBench is actively maintained and regularly updated with new models and evaluation metrics. For the latest updates, please check our [releases](https://github.com/XingruiWang/XModBench/releases) page.