---
task_categories:
- image-text-to-text
license: cc-by-nc-4.0
---
# MMSI-Bench

This repo contains evaluation code for the paper "MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence".

Homepage | Dataset | Paper | Code | arXiv
## News

- [2025-05-31]: MMSI-Bench is now supported in the VLMEvalKit repository.
- [2025-05-30]: We released the arXiv paper.
## Load Dataset

```python
from datasets import load_dataset

dataset = load_dataset("RunsenXu/MMSI-Bench")
print(dataset)
```
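Once loaded, you can inspect a single example as below. This is a minimal sketch: the split name (`test`) is an assumption, so check the output of `print(dataset)` for the actual splits and field names.

```python
# Minimal sketch: inspect one example from the benchmark.
# NOTE: the split name "test" is an assumption; use the splits
# reported by print(dataset) above.
sample = dataset["test"][0]
print(sample.keys())  # list the fields available for each question
```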
## Evaluation

Please refer to the evaluation guidelines of [VLMEvalKit](https://github.com/open-compass/VLMEvalKit). A sketch of a typical invocation is shown below.
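VLMEvalKit evaluations are launched through its `run.py` entry point. The command below is only a sketch: the dataset key (assumed here to be `MMSI_Bench`) and the model identifier (`GPT4o`) are assumptions, so consult VLMEvalKit's supported-benchmark and model lists for the exact names.

```bash
# Sketch of a VLMEvalKit run. The dataset key "MMSI_Bench" and the model
# identifier "GPT4o" are assumptions; check VLMEvalKit's docs for exact names.
python run.py --data MMSI_Bench --model GPT4o --verbose
```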
## MMSI-Bench Leaderboard

| Model | Avg. (%) | Type |
|---|---|---|
| Human Level | 97.2 | Baseline |
| o3 | 41.0 | Proprietary |
| GPT-4.5 | 40.3 | Proprietary |
| Gemini-2.5-Pro--Thinking | 37.0 | Proprietary |
| Gemini-2.5-Pro | 36.9 | Proprietary |
| Doubao-1.5-pro | 33.0 | Proprietary |
| GPT-4.1 | 30.9 | Proprietary |
| Qwen2.5-VL-72B | 30.7 | Open-source |
| NVILA-15B | 30.5 | Open-source |
| GPT-4o | 30.3 | Proprietary |
| Claude-3.7-Sonnet--Thinking | 30.2 | Proprietary |
| Seed1.5-VL | 29.7 | Proprietary |
| InternVL2.5-2B | 29.0 | Open-source |
| InternVL2.5-8B | 28.7 | Open-source |
| DeepSeek-VL2-Small | 28.6 | Open-source |
| InternVL3-78B | 28.5 | Open-source |
| InternVL2.5-78B | 28.5 | Open-source |
| LLaVA-OneVision-72B | 28.4 | Open-source |
| NVILA-8B | 28.1 | Open-source |
| InternVL2.5-26B | 28.0 | Open-source |
| DeepSeek-VL2 | 27.1 | Open-source |
| InternVL3-1B | 27.0 | Open-source |
| InternVL3-9B | 26.7 | Open-source |
| Qwen2.5-VL-3B | 26.5 | Open-source |
| InternVL2.5-4B | 26.3 | Open-source |
| InternVL2.5-1B | 26.1 | Open-source |
| Qwen2.5-VL-7B | 25.9 | Open-source |
| InternVL3-8B | 25.7 | Open-source |
| Llama-3.2-11B-Vision | 25.4 | Open-source |
| InternVL3-2B | 25.3 | Open-source |
| Random Guessing | 25.0 | Baseline |
| LLaVA-OneVision-7B | 24.5 | Open-source |
| DeepSeek-VL2-Tiny | 24.0 | Open-source |
| Blind GPT-4o | 22.7 | Baseline |
## Acknowledgment

MMSI-Bench makes use of data from existing image datasets: ScanNet, nuScenes, Matterport3D, Ego4D, AgiBot-World, DTU, DAVIS-2017, and Waymo. We thank these teams for their open-source contributions.
## Contact
- Sihan Yang: sihany077@gmail.com
- Runsen Xu: runsxu@gmail.com
## Citation

BibTeX: