ICLR 2026 Orals

Learning to See Before Seeing: Demystifying LLM Visual Priors from Language Pre-training

Junlin Han, Shengbang Tong, David Fan, Yufan Ren, Koustuv Sinha, Philip Torr, Filippos Kokkinos

LLMs & Reasoning · Fri, Apr 24 · 3:15 PM–3:25 PM · 202 A/B · Avg rating: 7.00 (6–8)
Author-provided TL;DR

Explore and understand the visual priors within LLMs and thus build better MLLMs.

Abstract

Large Language Models (LLMs), despite being trained on text alone, surprisingly develop rich visual priors. These priors allow latent visual capabilities to be unlocked for vision tasks with a relatively small amount of multimodal data, and enable the model to perform symbolic visual generation tasks without ever having seen an image. Through systematic analysis, we reveal that visual priors—the implicit, emergent knowledge about the visual world acquired during language pre-training—are composed of separable perception and reasoning priors with distinct scaling trends and origins. We show that an LLM's latent visual reasoning ability is predominantly developed by pre-training on reasoning-centric data (e.g., code, math, academia) and scales progressively. This reasoning prior acquired from language pre-training is transferable and universally applicable to visual reasoning. In contrast, the perception prior emerges more diffusely from broad corpora, and perception ability is more sensitive to the vision encoder and visual instruction tuning data. In parallel, text describing the visual world proves crucial, though its performance impact saturates rapidly. Leveraging these insights, we propose a data-centric recipe for pre-training vision-aware LLMs and verify it in 1T-token-scale pre-training. Our findings are grounded in over 100 controlled experiments consuming 500,000 GPU-hours, spanning the full MLLM construction pipeline—from LLM pre-training to visual alignment and supervised multimodal fine-tuning—across five model scales, a wide range of data categories and mixtures, and multiple adaptation setups. Along with our main findings, we also propose and investigate several hypotheses, and introduce a Multi-Level Existence Bench (MLE-Bench) to facilitate future research. Together, this work provides a new way of deliberately cultivating visual priors from language pre-training, paving the way for the next generation of multimodal LLMs.

We recommend a visit to our project page (https://junlinhan.github.io/projects/lsbs/) for an interactive reading.
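The abstract refers to the full MLLM construction pipeline (LLM pre-training, visual alignment, supervised multimodal fine-tuning) built around adapter-style architectures. As a rough structural sketch only, not the paper's code, the following minimal PyTorch example shows the shape of such a pipeline: a vision encoder, a linear projector ("adapter"), and an LLM that consumes the projected visual tokens alongside text tokens. All class names, dimensions, and the toy encoder/LLM are illustrative placeholders.

```python
import torch
import torch.nn as nn


class ToyVisionEncoder(nn.Module):
    """Placeholder vision encoder: maps image patches to feature vectors."""
    def __init__(self, patch_dim=768, feat_dim=1024):
        super().__init__()
        self.proj = nn.Linear(patch_dim, feat_dim)

    def forward(self, patches):                # patches: (B, num_patches, patch_dim)
        return self.proj(patches)              # -> (B, num_patches, feat_dim)


class ToyLLM(nn.Module):
    """Placeholder language model operating on token embeddings."""
    def __init__(self, embed_dim=2048, vocab_size=32000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(embed_dim, vocab_size)

    def forward(self, inputs_embeds):           # (B, T, embed_dim)
        return self.lm_head(self.blocks(inputs_embeds))


class AdapterStyleMLLM(nn.Module):
    """Vision encoder -> linear projector ("adapter") -> LLM."""
    def __init__(self, feat_dim=1024, embed_dim=2048):
        super().__init__()
        self.vision = ToyVisionEncoder(feat_dim=feat_dim)
        self.projector = nn.Linear(feat_dim, embed_dim)     # visual alignment module
        self.llm = ToyLLM(embed_dim=embed_dim)

    def forward(self, patches, text_ids):
        vis_tokens = self.projector(self.vision(patches))   # (B, P, embed_dim)
        txt_tokens = self.llm.embed(text_ids)                # (B, T, embed_dim)
        seq = torch.cat([vis_tokens, txt_tokens], dim=1)     # image tokens first
        return self.llm(seq)                                  # next-token logits


model = AdapterStyleMLLM()
logits = model(torch.randn(2, 196, 768), torch.randint(0, 32000, (2, 16)))
print(logits.shape)  # torch.Size([2, 212, 32000])
```

In the paper's framing, the LLM weights (and thus the visual priors) come from language-only pre-training; the projector and visual instruction tuning then unlock the latent visual capabilities.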

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Systematic study reveals LLMs acquire visual perception priors from diverse data and reasoning priors from code/math corpora.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Data-centric roadmap for developing multimodal systems, showing that visual priors emerge from separable perception and reasoning components
  • Visual reasoning prior is developed predominantly by reasoning-centric pre-training data, scales progressively, and transfers universally to visual reasoning
  • Perception prior emerges diffusely from broad corpora and is more sensitive to the vision encoder and visual instruction tuning
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Controlled pre-training experiments (see the sketch after this list)
  • Vision-language model training
  • Data composition analysis
  • Multi-scale evaluation
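The sketch referenced above is a minimal, purely illustrative Python example of how a controlled sweep over data mixtures and model scales might be organized. The category names, proportions, and scales are placeholders and do not reflect the paper's actual mixtures, model sizes, or results.

```python
from itertools import product

# Purely illustrative sweep: vary pre-training data composition and model
# scale, then evaluate perception- and reasoning-style visual benchmarks
# after visual alignment and instruction tuning. None of these numbers
# are from the paper.
data_mixtures = {
    "web_heavy":       {"web": 0.80, "code": 0.05, "math": 0.05, "books": 0.10},
    "reasoning_heavy": {"web": 0.40, "code": 0.30, "math": 0.20, "books": 0.10},
    "visual_text":     {"web": 0.60, "code": 0.10, "math": 0.10, "visual_descriptions": 0.20},
}
model_scales = ["0.5B", "1B", "3B", "7B"]  # placeholder scales


def run_experiment(mixture_name, scale):
    """Stub: pre-train on the mixture, attach a vision encoder via an adapter,
    run visual instruction tuning, and score perception/reasoning benchmarks.
    Returns dummy scores so the sketch runs end to end."""
    return {"perception": 0.0, "reasoning": 0.0}


results = {
    (name, scale): run_experiment(name, scale)
    for name, scale in product(data_mixtures, model_scales)
}
for (name, scale), scores in sorted(results.items()):
    print(f"{name:>15} @ {scale}: perception={scores['perception']}, reasoning={scores['reasoning']}")
```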
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • Public academic datasets
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Investigation primarily centered on adapter-style MLLM architectures; may not fully generalize to discrete visual tokenization or end-to-end joint training approaches
  • Does not address safety and ethical implications of learned visual priors that may encode societal biases from language corpora
  • Confined to static images, leaving exploration of visual priors for dynamic modalities like video as an open question
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Investigate learned visual priors in alternative MLLM architectures with different training paradigms
  • Conduct thorough audit of fairness and safety of emergent priors to identify biased visual associations
  • Explore visual priors for temporal reasoning, action recognition and causality in video understanding

Author keywords

  • LLM pre-training
  • MLLMs
  • multi-modality
