This dataset is a benchmark for evaluating the Basic Spatial Abilities of Multimodal Large Language Models, grounded in established psychometric theories. It is structured specifically to support both **Zero-shot** and **Few-shot** evaluation protocols.

# 📂 Dataset Structure (Important)

The dataset is organized into two distinct splits. **Please read this carefully to ensure valid evaluation results.**

- **test** — the benchmark questions used for evaluation in both protocols.
- **validation** — held-out examples reserved for building the few-shot prompt history (see the protocol below).

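As a quick sanity check before evaluating, both splits can be loaded and inspected. This is a minimal sketch; the repo id and split names are the ones used in the usage examples below:

```python
from datasets import load_dataset

# Load both splits of the benchmark
test_set = load_dataset("EmbodiedCity/BasicSpatialAbility", split="test")
validation_set = load_dataset("EmbodiedCity/BasicSpatialAbility", split="validation")

# Confirm sizes and column layout before running any evaluation
print(len(test_set), len(validation_set))
print(test_set.features)
```
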
---

# ⚙️ Usage & Evaluation Protocol

You can load the dataset using the Hugging Face `datasets` library.

### 1. Zero-Shot Evaluation

```python
from datasets import load_dataset

# Load the test split containing the benchmark questions
test_dataset = load_dataset("EmbodiedCity/BasicSpatialAbility", split="test")

for sample in test_dataset:
    image = sample['image']
    question = sample['question']
    # Model inference...
```

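To turn this loop into a complete zero-shot run, one possible harness is sketched below; `run_model` is a hypothetical placeholder for your own VLM inference call, not part of this dataset or the `datasets` API:

```python
from datasets import load_dataset

def run_model(image, question):
    """Hypothetical placeholder: swap in your VLM's actual inference call."""
    raise NotImplementedError

test_dataset = load_dataset("EmbodiedCity/BasicSpatialAbility", split="test")

predictions = []
for sample in test_dataset:
    # Collect one prediction per benchmark question for later scoring
    predictions.append(run_model(sample['image'], sample['question']))
```
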
### 2. Few-Shot Evaluation

To run a few-shot evaluation:

1. Load the validation split.
2. Format these examples into the prompt history.
3. Append the target question from the test split.

```python
from datasets import load_dataset

# 1. Load the validation split (few-shot examples) alongside the test split
validation_set = load_dataset("EmbodiedCity/BasicSpatialAbility", split="validation")
test_set = load_dataset("EmbodiedCity/BasicSpatialAbility", split="test")

# 2. Format the validation examples into the prompt history
#    (adapt this to your model's prompt or chat template)
few_shot_examples = [(ex['image'], ex['question']) for ex in validation_set]

# 3. Append the target question from the test split
for sample in test_set:
    image = sample['image']
    question = sample['question']
    # Combine few_shot_examples with (image, question), then run model inference...
```

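One way to realize step 2 is to interleave the validation examples as prior conversation turns. The sketch below assumes a chat-style multimodal API; the message schema and the (image, question, answer) triple are illustrative assumptions, since the card only confirms the `image` and `question` fields:

```python
def build_messages(few_shot_examples, target_image, target_question):
    """Interleave few-shot examples as prior turns, then append the target question.

    `few_shot_examples` is an iterable of (image, question, answer) triples;
    the answer field of the validation split is an assumption here.
    """
    messages = []
    for image, question, answer in few_shot_examples:
        messages.append({"role": "user", "content": [image, question]})
        messages.append({"role": "assistant", "content": answer})
    # The target question from the test split goes last (step 3)
    messages.append({"role": "user", "content": [target_image, target_question]})
    return messages
```
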
---

# 🔬 Underlying Theory

The Theory of Multiple Intelligences underscores the hierarchical nature of cognitive capabilities. To advance Spatial Artificial Intelligence, we pioneer a psychometric framework defining five Basic Spatial Abilities (BSAs) in Visual Language Models (VLMs): Spatial Perception, Spatial Relation, Spatial Orientation, Mental Rotation, and Spatial Visualization. Benchmarking 13 mainstream VLMs through nine validated psychometric experiments reveals significant gaps versus humans, with three key findings: 1) VLMs mirror human hierarchies (strongest in 2D orientation, weakest in 3D rotation) with independent BSAs; 2) Many smaller models surpass larger counterparts, with Qwen leading and InternVL2 lagging; 3) Interventions like CoT and few-shot training show limits from architectural constraints, while ToT demonstrates the most effective enhancement. Identified barriers include weak geometry encoding and missing dynamic simulation. By linking Psychometrics to VLMs, we provide a comprehensive BSA evaluation benchmark, a methodological perspective for embodied AI development, and a cognitive science-informed roadmap for achieving human-like spatial intelligence.