ICLR 2026 Orals

DepthLM: Metric Depth from Vision Language Models

Zhipeng Cai, Ching-Feng Yeh, Hu Xu, Zhuang Liu, Gregory P. Meyer, Xinjie Lei, Changsheng Zhao, Shang-Wen Li, Vikas Chandra, Yangyang Shi

LLMs & Reasoning · Thu, Apr 23 · 3:51–4:01 PM · Room 202 A/B · Avg rating: 6.67 (range 4–10)
Author-provided TL;DR

The first proof that VLMs can achieve expert-model-level depth estimation accuracy without architecture or loss changes

Abstract

Vision language models (VLMs) can flexibly address a wide range of vision tasks through text interaction. Although successful at semantic understanding, state-of-the-art VLMs, including GPT-5, still struggle to infer 3D structure from 2D inputs. Expert pure-vision models, on the other hand, achieve super-human accuracy in metric depth estimation, a key 3D understanding task, but they require task-specific architectures and losses. This gap motivates us to ask: can VLMs reach expert-level accuracy without any architecture or loss change? We take per-pixel metric depth estimation as the representative task and show that the answer is yes. Surprisingly, comprehensive analysis shows that text-based supervised fine-tuning with sparse labels is sufficient for VLMs to unlock strong 3D understanding; no dense prediction head or complex regression/regularization loss is needed. The true bottlenecks are pixel reference and cross-dataset camera ambiguity, which we address through visual prompting and intrinsic-conditioned augmentation. With much smaller models, our method DepthLM surpasses the accuracy of the most advanced VLMs by over 2x, making VLMs comparable with pure vision models for the first time. The simplicity of DepthLM also enables a single VLM to cover various 3D tasks beyond metric depth. Code and model are available at https://github.com/facebookresearch/DepthLM_Official.
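To make the visual-prompting idea concrete, here is a minimal sketch assuming a generic VLM chat interface: the query pixel is marked directly on the image so the model can reference it, and depth is requested as plain text, so ordinary next-token supervision on the answer string is the only training signal needed. The marker style, prompt wording, and `vlm.generate` call are illustrative assumptions, not DepthLM's exact implementation.

```python
# Hedged sketch of visual prompting for per-pixel depth queries.
# The marker design and prompt text are assumptions for illustration.
from PIL import Image, ImageDraw

def mark_pixel(image: Image.Image, x: int, y: int, radius: int = 8) -> Image.Image:
    """Overlay a circular marker at (x, y) so the pixel is referable in text."""
    marked = image.copy()
    draw = ImageDraw.Draw(marked)
    draw.ellipse((x - radius, y - radius, x + radius, y + radius),
                 outline="red", width=3)
    return marked

def build_depth_prompt() -> str:
    """Plain-text query; the model answers with a single number."""
    return ("What is the metric depth, in meters, of the point "
            "under the red circle? Answer with a single number.")

# Usage (hypothetical VLM interface):
# marked = mark_pixel(Image.open("scene.jpg"), x=320, y=240)
# answer = vlm.generate(image=marked, prompt=build_depth_prompt())
```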

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

DepthLM shows that VLMs can match pure vision models in metric depth estimation through text-based supervised fine-tuning and visual prompting, with no architecture changes.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Demonstrates that VLMs can achieve expert-level metric depth accuracy without task-specific architectures
  • Shows that text-based supervised fine-tuning with sparse labels is sufficient for VLMs to unlock 3D understanding
  • Addresses pixel reference and cross-dataset camera ambiguity through visual prompting and intrinsic-conditioned augmentation (one plausible form of the augmentation is sketched after this list)
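As referenced in the last bullet above, here is a minimal sketch of one plausible form of intrinsic-conditioned augmentation, assuming depth labels in meters and focal lengths in pixels: resizing an image by a factor s scales the effective focal length by s while true metric depth stays fixed, so training across random s with the intrinsics exposed to the model forces depth predictions to be conditioned on the camera. The function name and scale range are illustrative; this is not DepthLM's exact recipe.

```python
# Hedged sketch: random resizing changes the effective focal length but not
# the true metric depth, so the model must use the stated intrinsics to
# resolve cross-dataset camera ambiguity. Not DepthLM's exact implementation.
import random
from PIL import Image

def intrinsic_conditioned_resize(image, fx, fy, depth_m, s_range=(0.5, 2.0)):
    """Randomly resize the image and update the intrinsics accordingly.

    Args:
        image: PIL image.
        fx, fy: focal lengths in pixels.
        depth_m: metric depth label (meters) of the query pixel; unchanged.
    Returns:
        (augmented image, new fx, new fy, depth label).
    """
    s = random.uniform(*s_range)
    w, h = image.size
    resized = image.resize((int(w * s), int(h * s)), Image.BILINEAR)
    # Focal length in pixels scales with the resize; metric depth does not.
    return resized, fx * s, fy * s, depth_m

# The updated focal length can then be stated in the text prompt, e.g.:
# f"The camera focal length is {fx:.0f} pixels. What is the metric depth ..."
```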
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Vision language models
  • Visual prompting
  • Supervised finetuning
  • Metric depth estimation
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Focuses on the simplest VLM design; finer-grained strategies could further improve performance
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Design data filtering pipelines to add more diverse datasets
  • Train with diverse and complementary tasks to enhance generalization

Author keywords

  • Metric depth
  • Vision language model
  • Spatial reasoning
