ICLR 2026 Orals

Generating metamers of human scene understanding

Ritik Raina, Abe Leite, Alexandros Graikos, Seoyoung Ahn, Dimitris Samaras, Greg Zelinsky

Diffusion & Flow Matching · Sat, Apr 25 · 11:30 AM–11:40 AM · 204 A/B · Avg rating: 6.67 (6–8)

Abstract

Human vision combines low-resolution “gist” information from the visual periphery with sparse but high-resolution information from fixated locations to construct a coherent understanding of a visual scene. In this paper, we introduce MetamerGen, a tool for generating scenes that are aligned with latent human scene representations. MetamerGen is a latent diffusion model that combines peripherally obtained scene gist information with information obtained from scene-viewing fixations to generate image metamers for what humans understand after viewing a scene. Generating images from both high- and low-resolution (i.e., “foveated”) inputs constitutes a novel image-to-image synthesis problem, which we tackle by introducing a dual-stream representation of the foveated scenes consisting of DINOv2 tokens that fuse detailed features from fixated areas with peripherally degraded features capturing scene context. To evaluate the perceptual alignment of MetamerGen-generated images to latent human scene representations, we conducted a same–different behavioral experiment in which participants judged whether a generated image and the original image were the “same” or “different”. From these judgments, we identify generated scenes that are indeed metamers for the latent scene representations formed by the viewers. MetamerGen is a powerful tool for studying scene understanding. Our proof-of-concept analyses uncovered specific features at multiple levels of visual processing that contributed to human judgments. While MetamerGen can generate metamers even when conditioned on random fixations, we find that high-level semantic alignment most strongly predicts metamerism when the generated scenes are conditioned on viewers’ own fixated regions.
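
The abstract's dual-stream foveated representation can be made concrete with a small sketch. The snippet below is an illustrative assumption, not the authors' implementation: it extracts DINOv2 patch tokens from a sharp and a Gaussian-blurred copy of the scene, keeps the sharp tokens near the supplied fixations and the blurred tokens elsewhere, and returns one fused token set that could condition a latent diffusion generator. The function name `build_foveated_tokens`, the blur parameters, and the square foveal window are hypothetical; MetamerGen's actual peripheral degradation and fusion may differ.

```python
# Minimal sketch (not the authors' code) of a dual-stream foveated token
# representation: high-resolution DINOv2 tokens at fixated patches, blurred
# (peripherally degraded) tokens elsewhere. The blur-based degradation,
# patch-selection rule, and all names here are illustrative assumptions.
import torch
import torchvision.transforms.functional as TF

# Official DINOv2 ViT-S/14 from torch.hub (patch size 14).
dinov2 = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()

IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]


@torch.no_grad()
def build_foveated_tokens(image, fixations, img_size=518, patch=14, fovea_patches=3):
    """image: (3, H, W) tensor in [0, 1]; fixations: list of normalized (x, y) in [0, 1]."""
    image = TF.resize(image, [img_size, img_size]).unsqueeze(0)    # (1, 3, S, S)
    blurred = TF.gaussian_blur(image, kernel_size=31, sigma=8.0)   # peripheral degradation

    sharp_in = TF.normalize(image, IMAGENET_MEAN, IMAGENET_STD)
    periph_in = TF.normalize(blurred, IMAGENET_MEAN, IMAGENET_STD)

    # Patch tokens for the sharp (foveal) and blurred (peripheral) streams.
    sharp_tok = dinov2.forward_features(sharp_in)["x_norm_patchtokens"][0]    # (N, D)
    periph_tok = dinov2.forward_features(periph_in)["x_norm_patchtokens"][0]  # (N, D)

    grid = img_size // patch  # patches per side (37 for 518 / 14)
    ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")

    # Mark patches within a small window around each fixation as "fixated".
    fixated = torch.zeros(grid * grid, dtype=torch.bool)
    for fx, fy in fixations:
        px, py = int(fx * grid), int(fy * grid)
        near = (xs - px).abs().le(fovea_patches) & (ys - py).abs().le(fovea_patches)
        fixated |= near.flatten()

    # Fuse: detailed tokens at fixated locations, degraded tokens for scene context.
    fused = torch.where(fixated.unsqueeze(-1), sharp_tok, periph_tok)  # (N, D)
    return fused  # e.g., conditioning tokens for a latent diffusion generator
```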

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

MetamerGen is a latent diffusion model that fuses peripheral scene-gist information with fixation-based detail to generate image metamers of viewers' latent scene representations.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • MetamerGen, a latent diffusion model that combines peripheral scene-gist information with fixation-based detail to generate scenes aligned with latent human scene representations
  • A dual-stream representation of foveated scenes built from DINOv2 tokens, fusing detailed features from fixated areas with peripherally degraded features that capture scene context
  • A same–different behavioral experiment identifying which generated scenes are metamers for viewers' latent scene representations
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Latent diffusion models
  • DINOv2 feature tokens
  • Dual-stream foveated image representation
  • Same–different behavioral experiment
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • human scene understanding
  • generative modeling

Related orals

Generative Human Geometry Distribution

Introduces a distribution-over-distribution model that combines geometry distributions with two-stage flow matching for 3D human generation.

Avg rating: 5.50 (2–8) · Xiangjun Tang et al.