Thinking in Structures: Evaluating Spatial Intelligence through Reasoning on Constrained Manifolds
Paper: arXiv:2602.07864
**Schema** (from the dataset viewer):

| Field | Type | Notes |
|---|---|---|
| index | int64 | sample ID (1–1k) |
| image | list of images | 4–6 views per sample |
| question | string | 11 prompt templates (e.g. `GROUND_HEIGHT_PROMPT_TEMPLATE`) |
| answer | string | 30 classes; ground-truth ordering, e.g. `[2, 3, 1, 4]` |
| annotation_color | string | 6 classes (e.g. yellow, cyan, magenta, red) |
| category | string | 2 classes: geometric, topological |
| task | string | 10 classes: ground_height, ground_angle, dimension, relative_distance, area, volume, multi_view_geometric, hop_distance, cycle_length, multi_view_topological |

**Example rows** (one per task; `image` column omitted):

| index | question | answer | annotation_color | category | task |
|---|---|---|---|---|---|
| 376 | GROUND_HEIGHT_PROMPT_TEMPLATE | [2, 3, 1, 4] | yellow | geometric | ground_height |
| 230 | GROUND_ANGLE_PROMPT_TEMPLATE | [1, 3, 4, 2] | yellow | geometric | ground_angle |
| 108 | DIMENSION_PROMPT_TEMPLATE | [1, 3, 4, 2] | cyan | geometric | dimension |
| 563 | RELATIVE_DISTANCE_PROMPT_TEMPLATE | [3, 1, 2] | yellow | geometric | relative_distance |
| 53 | AREA_PROMPT_TEMPLATE | [1, 2, 3] | cyan | geometric | area |
| 620 | VOLUME_PROMPT_TEMPLATE | [2, 1, 3] | yellow | geometric | volume |
| 414 | MV_RELATIVE_DISTANCE_PROMPT_TEMPLATE | [2, 3, 1] | red | geometric | multi_view_geometric |
| 852 | HOP_DISTANCE_PROMPT_TEMPLATE | [2, 1, 3] | red | topological | hop_distance |
| 727 | CYCLE_LENGTH_PROMPT_TEMPLATE | [2, 1, 3] | yellow | topological | cycle_length |
| 950 | MV_CYCLE_LENGTH_PROMPT_TEMPLATE | [3, 1, 2] | magenta | topological | multi_view_topological |
SSI-Bench is constructed from complex real-world 3D structures, where feasible configurations are tightly governed by geometric, topological, and physical constraints.
Load it with the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("cyang203912/SSI-Bench")
print(dataset)
```
After downloading the parquet file, read each record, decode the images from their binary representation, and save them as JPG files:

```python
import os

import pandas as pd

df = pd.read_parquet("SSI_Bench.parquet")

output_dir = "./images"
os.makedirs(output_dir, exist_ok=True)

for _, row in df.iterrows():
    index_val = row["index"]
    images = row["image"]
    question = row["question"]
    answer = row["answer"]
    annotation_color = row["annotation_color"]
    category = row["category"]
    task = row["task"]

    image_paths = []
    if images is not None:
        for n, img_data in enumerate(images):
            # Parquet image columns are often stored as structs
            # ({"bytes": ..., "path": ...}); fall back to raw bytes otherwise.
            img_bytes = img_data["bytes"] if isinstance(img_data, dict) else img_data
            image_path = os.path.join(output_dir, f"{index_val}_{n}.jpg")
            with open(image_path, "wb") as f:
                f.write(img_bytes)
            image_paths.append(image_path)

    print(f"index: {index_val}")
    print(f"image: {image_paths}")
    print(f"question: {question}")
    print(f"answer: {answer}")
    print(f"annotation_color: {annotation_color}")
    print(f"category: {category}")
    print(f"task: {task}")
    print("-" * 50)
```
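After extraction, a quick sanity check that the written files are structurally valid JPEGs can be done with the JPEG start and end markers. This is a generic check, not part of SSI-Bench's tooling:

```python
import os


def looks_like_jpeg(path: str) -> bool:
    """Cheap structural check: JPEG data starts with the SOI marker
    (FF D8) and ends with the EOI marker (FF D9)."""
    if os.path.getsize(path) < 4:
        return False
    with open(path, "rb") as f:
        head = f.read(2)
        f.seek(-2, os.SEEK_END)  # jump to the last two bytes
        tail = f.read(2)
    return head == b"\xff\xd8" and tail == b"\xff\xd9"
```

This only inspects the two outermost markers; for full validation, open the file with an image library such as Pillow.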
To evaluate, follow the scripts in the code repository: https://github.com/ccyydd/SSI-Bench.
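The official metrics come from those scripts; for orientation only, here is a minimal exact-match scorer, under the assumption that predictions and gold answers are ordering lists serialized like the `answer` column (e.g. `"[2, 3, 1, 4]"`). The function names are illustrative, not from the repository:

```python
import ast


def exact_match(pred: str, gold: str) -> bool:
    # Parse both ordering lists and compare them element-wise.
    # Unparseable predictions count as incorrect.
    try:
        return ast.literal_eval(pred) == ast.literal_eval(gold)
    except (ValueError, SyntaxError):
        return False


def accuracy(records) -> float:
    # records: iterable of (prediction, gold_answer) string pairs
    records = list(records)
    if not records:
        return 0.0
    return sum(exact_match(p, g) for p, g in records) / len(records)
```

For example, `accuracy([("[1, 2, 3]", "[1, 2, 3]"), ("[1, 2, 3]", "[3, 1, 2]")])` returns `0.5`.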
| Model | Avg. (%) | Type |
|---|---|---|
| Human Performance | 91.60 | Baseline |
| Gemini-3-Flash | 33.60 | Proprietary |
| Gemini-3-Pro | 29.50 | Proprietary |
| GPT-5.2 | 29.10 | Proprietary |
| Gemini-2.5-Pro | 26.10 | Proprietary |
| GPT-5 mini | 25.90 | Proprietary |
| Seed-1.8 | 25.90 | Proprietary |
| GPT-4o | 22.60 | Proprietary |
| GPT-4.1 | 22.40 | Proprietary |
| Gemini-2.5-Flash | 22.30 | Proprietary |
| GLM-4.6V | 22.20 | Open-source |
| Qwen3-VL-235B-A22B | 21.90 | Open-source |
| GLM-4.5V | 21.40 | Open-source |
| GLM-4.6V-Flash | 21.10 | Open-source |
| Qwen3-VL-4B | 20.70 | Open-source |
| InternVL3.5-30B-A3B | 20.70 | Open-source |
| Qwen3-VL-30B-A3B | 20.60 | Open-source |
| Llama-4-Scout-17B-16E | 20.60 | Open-source |
| Gemma-3-27B | 20.50 | Open-source |
| InternVL3.5-8B | 20.20 | Open-source |
| Claude-Sonnet-4.5 | 19.90 | Proprietary |
| Gemma-3-4B | 19.70 | Open-source |
| Qwen3-VL-8B | 19.20 | Open-source |
| Qwen3-VL-2B | 19.20 | Open-source |
| InternVL3.5-38B | 19.00 | Open-source |
| InternVL3.5-241B-A28B | 18.30 | Open-source |
| InternVL3.5-14B | 17.90 | Open-source |
| Gemma-3-12B | 17.30 | Open-source |
| LLaVA-Onevision-72B | 17.20 | Open-source |
| InternVL3.5-4B | 16.80 | Open-source |
| LLaVA-Onevision-7B | 16.50 | Open-source |
| Random Guessing | 12.85 | Baseline |
| InternVL3.5-2B | 11.10 | Open-source |
```bibtex
@article{yang2026thinking,
  title={Thinking in Structures: Evaluating Spatial Intelligence through Reasoning on Constrained Manifolds},
  author={Chen Yang and Guanxin Lin and Youquan He and Peiyao Chen and Guanghe Liu and Yufan Mo and Zhouyuan Xu and Linhao Wang and Guohui Zhang and Zihang Zhang and Shenxiang Zeng and Chen Wang and Jiansheng Fan},
  journal={arXiv preprint arXiv:2602.07864},
  year={2026}
}
```