---
license: cc-by-4.0
---
<div align="center">
  <h1>Spatial Reasoning with Vision-Language Models in Ego-Centric Multi-view Scenes</h1>
<a href="https://arxiv.org/abs/2509.06266" target="_blank">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-red?logo=arxiv" height="20" />
</a>
<a href="https://vbdi.github.io/Ego3D-Bench-webpage/" target="_blank">
    <img alt="Website" src="https://img.shields.io/badge/🌎_Website-blue.svg" height="20" />
</a>
<a href="https://github.com/vbdi/Ego3D-Bench" target="_blank">
    <img alt="Code: Code" src="https://img.shields.io/badge/Code-100000?logo=github&logoColor=white" height="20" />
</a>
</div>

---

### ⚖️ **Ego3D-Bench Overview**
We introduce Ego3D-Bench, a benchmark designed to evaluate the spatial understanding of VLMs in ego-centric multi-view scenarios. Images are collected from three different datasets: NuScenes, Argoverse, and Waymo. Questions are designed to require cross-view reasoning. We define questions both from the ego perspective and from the perspective of objects in the scene, and, to clearly indicate the perspective of each question, categorize them as ego-centric or object-centric. In total there are 10 question types: 8 multiple-choice QAs and 2 exact-number QAs. The figure below shows a sample.

![Sample](figs/Fig5_v2.png)

---

### ⚖️ **Ego3D-Bench vs. Other Spatial Reasoning Benchmarks**
<div align="center">
<img src="figs/benchmarks.png" width="600" height="100">
</div>

---

📄 **Dataset Access and License Notice:**

This dataset includes a subsample of the Waymo Open Dataset (WOD) and is governed by the Waymo Open Dataset License Agreement.
Please review the full license terms at: https://waymo.com/open/terms

🔒 **Access and Usage Conditions**

- License Compliance: This dataset is derived from the Waymo Open Dataset (WOD). All use of this dataset must comply with the terms outlined in the WOD license.

- Non-Commercial Use Only: This dataset is made available exclusively for non-commercial research purposes. Any commercial use is strictly prohibited.

- Access Agreement: Requesting or accessing this dataset constitutes your agreement to the Waymo Open Dataset License.

---


### 📌 Benchmarking on Ego3D-Bench:

Refer to the GitHub page (https://github.com/vbdi/Ego3D-Bench) for instructions on benchmarking with this dataset.

---


### Citation:
If you find our paper and code useful in your research, please consider giving us a star ⭐ and citing our work 📝 :)

```bibtex
@misc{gholami2025spatialreasoningvisionlanguagemodels,
      title={Spatial Reasoning with Vision-Language Models in Ego-Centric Multi-View Scenes}, 
      author={Mohsen Gholami and Ahmad Rezaei and Zhou Weimin and Sitong Mao and Shunbo Zhou and Yong Zhang and Mohammad Akbari},
      year={2025},
      eprint={2509.06266},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.06266}, 
}
```