ICLR 2026 Orals

On the Generalization Capacities of MLLMs for Spatial Intelligence

Gongjie Zhang, Wenhao Li, Quanhao Qian, Jiuniu Wang, Deli Zhao, Shijian Lu, Ran Xu

LLMs & Reasoning · Thu, Apr 23 · 3:39 PM–3:49 PM · Room 202 A/B · Avg rating: 6.00 (range 4–8)
Author-provided TL;DR

We show that RGB-only MLLMs are fundamentally flawed for spatial reasoning due to an inherent geometric ambiguity, and propose a camera-aware MLLM framework that incorporates camera intrinsics for robust, generalizable spatial intelligence.

Abstract

Multimodal Large Language Models (MLLMs) that directly process RGB inputs for tasks like 3D localization and navigation have shown remarkable potential. However, we argue that these "RGB-only" approaches are fundamentally flawed in their ability to generalize across cameras. By ignoring camera parameters, they entangle an object's physical properties with the camera's perspective, creating an irresolvable ambiguity. We show that this leads MLLMs to overfit to the training camera distribution rather than learning true, generalizable 3D geometric principles. To address this, we propose a Camera-Aware MLLM framework for spatial MLLMs. It learns generalizable spatial reasoning by: (i) injecting camera intrinsics via a dense embedding that conditions each visual token; (ii) introducing a camera-aware data augmentation strategy that synthetically varies camera parameters, forcing the model to disentangle camera properties from scene content; and (iii) distilling geometric priors from a 3D vision foundation model. Extensive experiments demonstrate that camera-aware MLLMs substantially outperform their naive counterparts, particularly in cross-camera generalization tests on spatially grounded tasks, indicating that camera awareness is not only beneficial but a prerequisite for robust, generalizable spatial intelligence in MLLMs.
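The abstract's component (i) — a dense intrinsics embedding that conditions each visual token — can be illustrated with a minimal sketch. This is not the authors' code; the MLP shape, the tanh nonlinearity, and additive conditioning are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

# Hypothetical two-layer MLP mapping intrinsics (fx, fy, cx, cy) to token space.
W1 = rng.standard_normal((4, d_model)) * 0.1
W2 = rng.standard_normal((d_model, d_model)) * 0.1

def camera_embedding(intrinsics):
    """intrinsics: (B, 4) array of (fx, fy, cx, cy) -> (B, d_model) embedding."""
    h = np.tanh(intrinsics @ W1)
    return h @ W2

def condition_tokens(tokens, intrinsics):
    """Add the camera embedding to every visual token.
    tokens: (B, N, d_model); intrinsics: (B, 4)."""
    return tokens + camera_embedding(intrinsics)[:, None, :]

tokens = rng.standard_normal((2, 16, d_model))
intrinsics = np.array([[500., 500., 320., 240.],
                       [800., 800., 512., 384.]])
out = condition_tokens(tokens, intrinsics)
print(out.shape)  # (2, 16, 64)
```

Because the embedding is broadcast across all N tokens, every visual token carries the same camera signal, letting downstream attention layers resolve the perspective ambiguity the paper describes.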

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Camera-Aware MLLM framework improves spatial reasoning by injecting camera parameters and using geometric augmentation.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Camera intrinsics injection via dense embeddings conditioning visual tokens for camera-aware representations
  • Camera-aware data augmentation strategy that synthetically varies camera parameters to disentangle camera from content
  • Distillation of geometric priors from 3D vision foundation models to improve spatial intelligence
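The second contribution — synthetically varying camera parameters — reduces to simple intrinsics bookkeeping: cropping and resizing an image simulates a different camera, provided the intrinsics matrix is updated consistently. A sketch under standard pinhole conventions (not the authors' implementation):

```python
import numpy as np

def crop_resize_intrinsics(K, crop_xywh, out_hw):
    """Update a 3x3 pinhole intrinsics matrix after cropping a region
    (x, y, w, h) and resizing it to (out_h, out_w).
    The result describes the simulated camera that would have
    captured the augmented image directly."""
    x, y, w, h = crop_xywh
    out_h, out_w = out_hw
    K2 = K.astype(float).copy()
    K2[0, 2] -= x                  # shift principal point by crop offset
    K2[1, 2] -= y
    sx, sy = out_w / w, out_h / h
    K2[0] *= sx                    # resize scales fx and cx
    K2[1] *= sy                    # resize scales fy and cy
    return K2

K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
# Center-crop 320x240 from a 640x480 image, then resize back to 640x480:
K2 = crop_resize_intrinsics(K, (160, 120, 320, 240), (480, 640))
print(K2[0, 0], K2[0, 2])  # 1000.0 320.0
```

Pairing each augmented image with its updated intrinsics (fed through the embedding above) gives the model paired evidence for separating camera properties from scene content.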
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Dense embeddings
  • Camera parameter augmentation
  • Knowledge distillation
  • Vision-language models
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • 3D Computer Vision
  • Multimodal Large Language Model
  • Spatial Intelligence
  • Embodied AI
