---
license: cc-by-4.0
---

# Spatial Reasoning with Vision-Language Models in Ego-Centric Multi-view Scenes

[arXiv](https://arxiv.org/abs/2509.06266) | Website | [Code](https://github.com/vbdi/Ego3D-Bench)

## ⚖️ Ego3D-Bench Overview

We introduce Ego3D-Bench, a benchmark designed to evaluate the spatial understanding of VLMs in ego-centric, multi-view scenarios. Images are collected from three datasets: NuScenes, Argoverse, and Waymo. The questions are designed to require cross-view reasoning. We define questions both from the ego perspective and from the perspective of objects in the scene, and categorize each question as ego-centric or object-centric to make its perspective explicit. In total, there are 10 question types: 8 multiple-choice QAs and 2 exact-number QAs.

*Figure: a sample from Ego3D-Bench.*
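Since the data is distributed in the Arrow format and is compatible with the 🤗 Datasets library, it can be loaded directly from the Hub. The snippet below is a minimal sketch: the repo id (`mgholami/Ego3D-Bench`), split names, and column names are assumptions for illustration, so check the dataset viewer for the actual schema.

```python
from datasets import load_dataset

# NOTE: the repo id, split names, and column names below are assumptions for
# illustration only -- verify them against the dataset viewer before running.
ds = load_dataset("mgholami/Ego3D-Bench")   # Arrow-backed DatasetDict

print(ds)                                   # list available splits and columns

first_split = next(iter(ds))                # name of the first split
example = ds[first_split][0]                # one benchmark example
print(example.keys())                       # e.g. question, choices, answer, images
```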


## ⚖️ Ego3D-Bench vs. other Spatial Reasoning Benchmarks


## 📄 Dataset Access and License Notice

This dataset includes a subsample of the Waymo Open Dataset (WOD) and is governed by the Waymo Open Dataset License Agreement. Please review the full license terms at: https://waymo.com/open/terms

### 🔒 Access and Usage Conditions

- **License Compliance:** This dataset is derived from the Waymo Open Dataset (WOD). All use of this dataset must comply with the terms outlined in the WOD license.

- **Non-Commercial Use Only:** This dataset is made available exclusively for non-commercial research purposes. Any commercial use is strictly prohibited.

- **Access Agreement:** Requesting or accessing this dataset constitutes your agreement to the Waymo Open Dataset License.


## 📌 Benchmarking on Ego3D-Bench

Refer to the GitHub page (https://github.com/vbdi/Ego3D-Bench) to perform benchmarking using this dataset.
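As a rough illustration of how the two answer formats described above can be scored, here is a minimal sketch. It is not the official evaluation code: the field names (`qa_type`, `answer`) and the strict-equality rule for exact-number answers are assumptions, so follow the GitHub instructions for the actual protocol.

```python
def score_predictions(examples, predictions):
    """Illustrative accuracy computation for the two QA formats in Ego3D-Bench.

    `examples` and `predictions` are parallel lists; the field names
    (`qa_type`, `answer`) are placeholders, not the dataset's real schema.
    """
    correct = {"multi_choice": 0, "exact_number": 0}
    total = {"multi_choice": 0, "exact_number": 0}

    for ex, pred in zip(examples, predictions):
        qa_type = ex["qa_type"]
        total[qa_type] += 1
        if qa_type == "multi_choice":
            # Grade multiple-choice answers by exact match on the option letter.
            correct[qa_type] += int(str(pred).strip().upper() == str(ex["answer"]).strip().upper())
        else:
            # Grade exact-number answers by strict equality of the counts.
            correct[qa_type] += int(float(pred) == float(ex["answer"]))

    # Per-format accuracy, skipping formats with no questions.
    return {k: correct[k] / total[k] for k in total if total[k] > 0}
```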


## Citation

If you find our paper and code useful in your research, please consider giving us a star ⭐ and citing our work 📝 :)

```bibtex
@misc{gholami2025spatialreasoningvisionlanguagemodels,
      title={Spatial Reasoning with Vision-Language Models in Ego-Centric Multi-View Scenes}, 
      author={Mohsen Gholami and Ahmad Rezaei and Zhou Weimin and Sitong Mao and Shunbo Zhou and Yong Zhang and Mohammad Akbari},
      year={2025},
      eprint={2509.06266},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.06266}, 
}
```