arxiv:2602.20160

tttLRM: Test-Time Training for Long Context and Autoregressive 3D Reconstruction

Published on Feb 23 · Submitted by Chen Wang on Feb 24

Abstract

tttLRM is a 3D reconstruction model that uses a Test-Time Training layer to enable long-context, autoregressive reconstruction with linear computational complexity, outperforming state-of-the-art feedforward methods on both objects and scenes.

AI-generated summary

We propose tttLRM, a novel large 3D reconstruction model that leverages a Test-Time Training (TTT) layer to enable long-context, autoregressive 3D reconstruction with linear computational complexity, further scaling the model's capability. Our framework efficiently compresses multiple image observations into the fast weights of the TTT layer, forming an implicit 3D representation in the latent space that can be decoded into various explicit formats, such as Gaussian Splats (GS) for downstream applications. The online learning variant of our model supports progressive 3D reconstruction and refinement from streaming observations. We demonstrate that pretraining on novel view synthesis tasks effectively transfers to explicit 3D modeling, resulting in improved reconstruction quality and faster convergence. Extensive experiments show that our method achieves superior performance in feedforward 3D Gaussian reconstruction compared to state-of-the-art approaches on both objects and scenes.
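The abstract does not include code, but the core mechanism it describes (compressing a token stream into the "fast weights" of a TTT layer via online gradient steps on an inner self-supervised loss, giving linear cost in sequence length) can be sketched. Below is a minimal, illustrative PyTorch sketch of that general idea; the module names, shapes, inner loss, and learning rate are all assumptions for exposition, not the authors' implementation. See the released code on the project page for the actual architecture.

```python
# Minimal sketch of a test-time-training (TTT) layer: an inner model W is
# updated online, one token at a time, so cost is linear in sequence length.
# Everything here is an illustrative assumption, not tttLRM's actual code.
import torch
import torch.nn as nn


class TTTLayerSketch(nn.Module):
    """Compresses a token stream into fast weights W by taking gradient
    steps on an inner reconstruction loss, then reads out through W."""

    def __init__(self, dim: int, inner_lr: float = 0.1):
        super().__init__()
        self.theta_k = nn.Linear(dim, dim, bias=False)  # inner "key" view
        self.theta_v = nn.Linear(dim, dim, bias=False)  # inner target view
        self.theta_q = nn.Linear(dim, dim, bias=False)  # readout query view
        self.inner_lr = inner_lr  # hypothetical inner-loop step size

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (seq_len, dim). Fast weights start fresh per sequence.
        dim = tokens.shape[-1]
        W = torch.zeros(dim, dim, device=tokens.device)
        outputs = []
        for x in tokens:  # single pass -> O(seq_len) overall
            k, v, q = self.theta_k(x), self.theta_v(x), self.theta_q(x)
            pred = k @ W
            # Gradient of the inner loss 0.5 * ||k @ W - v||^2 w.r.t. W.
            grad = torch.outer(k, pred - v)
            W = W - self.inner_lr * grad      # online "test-time" update
            outputs.append(q @ W)             # read out from updated memory
        return torch.stack(outputs)
```

In the setting the abstract describes, the tokens would come from multiple image observations, and the fast weights W would serve as the implicit latent 3D representation that a separate decoder maps to explicit formats such as Gaussian Splats.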

Community


Project page: https://cwchenwang.github.io/tttLRM/. The code has been open-sourced.
