ICLR 2026 Orals

Vid-LLM: A Compact Video-based 3D Multimodal LLM with Reconstruction–Reasoning Synergy

Haijier Chen, Bo Xu, Shoujian Zhang, Haoze Liu, Jiaxuan Lin, Jingrong Wang

LLMs & Reasoning Thu, Apr 23 · 4:27 PM–4:37 PM · 202 A/B Avg rating: 6.67 (6–8)

Abstract

Recent developments in Multimodal Large Language Models (MLLMs) have significantly improved Vision–Language (VL) reasoning in 2D domains. However, extending these capabilities to 3D scene understanding remains a major challenge. Existing 3D Multimodal Large Language Models (3D-MLLMs) often depend on 3D data inputs, which limits scalability and generalization. To address this limitation, we propose Vid-LLM, a video-based 3D-MLLM that directly processes video inputs without requiring external 3D data, making it practical for real-world deployment. In our method, geometric priors are used directly to improve scene perception. To integrate these geometric cues into the MLLM compactly, we design a Cross-Task Adapter (CTA) module that aligns the 3D geometric priors with the vision-language representations. To ensure geometric consistency and integrity, we introduce a Metric Depth Model that recovers real-scale geometry from the reconstruction outputs. Finally, the model is fine-tuned with a two-stage distillation optimization strategy, achieving fast convergence and stable training. Extensive experiments across diverse benchmarks verify the effectiveness of our method on 3D Question Answering, 3D Dense Captioning, and 3D Visual Grounding, demonstrating its strong multi-task capabilities.
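The abstract does not specify how the Cross-Task Adapter fuses geometric priors with vision-language features. One plausible reading is a cross-attention fusion in which VL tokens attend to geometric-prior tokens and the attended geometry is added back as a residual. The minimal single-head numpy sketch below illustrates that idea only; the function name, shapes, and residual design are assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_task_adapter(vl_tokens, geo_tokens, W_q, W_k, W_v):
    """Hypothetical adapter: vision-language tokens (queries) attend to
    geometric-prior tokens (keys/values); result is fused residually."""
    q = vl_tokens @ W_q                                  # (N_vl, d)
    k = geo_tokens @ W_k                                 # (N_geo, d)
    v = geo_tokens @ W_v                                 # (N_geo, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))       # (N_vl, N_geo)
    return vl_tokens + attn @ v                          # (N_vl, d)

rng = np.random.default_rng(0)
d = 8
vl = rng.standard_normal((4, d))     # e.g. 4 vision-language tokens
geo = rng.standard_normal((6, d))    # e.g. 6 geometric-prior tokens
W_q, W_k, W_v = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
fused = cross_task_adapter(vl, geo, W_q, W_k, W_v)
print(fused.shape)  # (4, 8): same shape as the VL tokens
```

Keeping the output in the VL token space, as here, would let the fused features feed the downstream LLM unchanged, which matches the paper's stated goal of integrating geometric cues "compactly".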

One-sentence summary·Auto-generated by claude-haiku-4-5-20251001

Vid-LLM is a video-based 3D multimodal LLM that extracts geometric cues from videos without external 3D data for 3D scene understanding.

Contributions·Auto-generated by claude-haiku-4-5-20251001
  • Proposes video-based 3D-MLLM directly processing video inputs without requiring external 3D data, improving scalability and generalization
  • Designs Cross-Task Adapter module to align 3D geometric priors with vision-language representations
  • Introduces Metric Depth Model recovering real-scale geometry from reconstruction outputs ensuring geometric consistency
Methods used·Auto-generated by claude-haiku-4-5-20251001
  • Video processing
  • 3D reconstruction
  • Cross-task adaptation
  • Metric depth estimation
Limitations (author-stated)·Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated)·Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • video-based 3D MLLM
  • geometric priors
  • Cross-Task Adapter
  • Metric Depth calibration
