LEGO-Bench
LEGO-Bench is a benchmark designed to evaluate text-guided 3D scene synthesis using fine-grained, realistic instructions. Each instruction contains multiple constraints describing layout, materials, objects, and placements, reflecting the compositional complexity of real-world indoor scenes. The dataset includes 130 instructions paired with manually aligned 3D scenes, totaling 1,250 annotated constraints. On average, each instruction contains about 10 constraints, covering both architectural elements (walls, floors, doors, windows) and object-level relationships. Together, these detailed annotations enable systematic, constraint-level evaluation of how well generated scenes satisfy natural-language specifications.
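To work with the files locally, one option is to pull a snapshot of the repository with huggingface_hub. This is a minimal sketch: the repo id below is a placeholder, since the card does not state the exact Hub path.

```python
# Minimal sketch: download the LEGO-Bench files from the Hugging Face Hub.
# The repo id is a placeholder -- substitute this dataset's actual id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<org>/LEGO-Bench",  # placeholder repo id
    repo_type="dataset",
)
print("Dataset downloaded to:", local_dir)
```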
Schema
full_data (json): contains the instructions and the constraints within each instruction.
data_0 ~ data_129 (folders): each contains the 3D scene aligned with the corresponding instruction, stored in JSON format.
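A minimal sketch of reading this layout, assuming full_data is stored as full_data.json and that each entry exposes "instruction" and "constraints" fields; these file and field names are assumptions for illustration and are not confirmed by the card.

```python
# Sketch: iterate over instructions and their constraint counts.
# Assumed layout: a top-level full_data.json plus data_0 ... data_129 folders.
import json
from pathlib import Path

root = Path("path/to/LEGO-Bench")  # local copy of the dataset (placeholder path)

with open(root / "full_data.json", "r", encoding="utf-8") as f:
    full_data = json.load(f)

# Assumed entry fields: "instruction" (text) and "constraints" (list).
for idx, entry in enumerate(full_data):
    constraints = entry.get("constraints", [])
    scene_dir = root / f"data_{idx}"  # folder holding the aligned 3D scene (JSON)
    print(f"data_{idx}: {len(constraints)} constraints, scene at {scene_dir}")
```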
Evaluating LEGO-Bench with LEGO-Eval
LEGO-Eval is a tool-augmented evaluation framework for text-guided 3D scene synthesis. It enables fine-grained and interpretable assessment of instruction-scene alignment by grounding scene components using a diverse suite of 21 multimodal tools, supporting multi-hop reasoning over spatial and attribute constraints.
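The card does not include evaluation code, but as a purely illustrative sketch, a constraint-level score for this kind of benchmark can be reported as the fraction of annotated constraints a generated scene satisfies, averaged over instructions. The functions below are hypothetical and do not reproduce LEGO-Eval's implementation.

```python
# Hypothetical sketch of constraint-level scoring: given per-constraint
# pass/fail judgments (e.g., produced by a tool-augmented checker), report
# the fraction of constraints each instruction's generated scene satisfies.
from typing import Dict, List


def constraint_satisfaction_rate(judgments: List[bool]) -> float:
    """Fraction of constraints judged satisfied for one instruction."""
    if not judgments:
        return 0.0
    return sum(judgments) / len(judgments)


def benchmark_score(per_instruction: Dict[str, List[bool]]) -> float:
    """Average per-instruction satisfaction rate over the benchmark."""
    rates = [constraint_satisfaction_rate(j) for j in per_instruction.values()]
    return sum(rates) / len(rates) if rates else 0.0


# Example usage with made-up judgments for two instructions.
scores = {
    "data_0": [True, True, False, True],
    "data_1": [True, False, False],
}
print(f"Benchmark score: {benchmark_score(scores):.3f}")
```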
Citation
If you use this dataset, please cite:
@article{hwangbo2025lego,
  title={LEGO-Eval: Towards Fine-Grained Evaluation on Synthesizing 3D Embodied Environments with Tool Augmentation},
  author={Hwangbo, Gyeom and Chae, Hyungjoo and Kang, Minseok and Ju, Hyeonjong and Oh, Soohyun and Yeo, Jinyoung},
  journal={arXiv preprint arXiv:2511.03001},
  year={2025}
}