ICLR 2026 Orals

Seeing Through the Brain: New Insights from Decoding Visual Stimuli with fMRI

Zheng Huang, Enpei Zhang, Weikang Qiu, Yinghao Cai, Carl Yang, Elynn Chen, Xiang Zhang, Rex Ying, Dawei Zhou, Yujun Yan

Vision & 3D · Fri, Apr 24 · 4:03 PM–4:13 PM · 202 A/B · Avg rating: 6.00 (4–8)
Author-provided TL;DR

We present PRISM, a framework that decodes visual stimuli from fMRI via alignment with a language model's text space.

Abstract

Understanding how the brain encodes visual information is a central challenge in neuroscience and machine learning. A promising approach is to reconstruct visual stimuli (essentially images) from functional Magnetic Resonance Imaging (fMRI) signals. This involves two stages: transforming fMRI signals into a latent space, and then using a pre-trained generative model to reconstruct images from that space. The reconstruction quality depends on how closely the latent space matches the structure of neural activity and on how well the generative model produces images from it. Yet it remains unclear which type of latent space best supports this transformation and how it should be organized to represent visual stimuli effectively.

We present two key findings. First, fMRI signals are more similar to the text space of a language model than to either a vision-based space or a joint text–image space. Second, text representations and the generative model should be adapted to capture the compositional nature of visual stimuli, including objects, their detailed attributes, and their relationships. Building on these insights, we propose PRISM, a model that Projects fMRI sIgnals into a Structured text space as an interMediate representation for visual stimuli reconstruction. It includes an object-centric diffusion module that generates images by composing individual objects to reduce object detection errors, and an attribute/relationship search module that automatically identifies the key attributes and relationships that best align with neural activity. Extensive experiments on real-world datasets demonstrate that our framework outperforms existing methods, achieving up to a 6% reduction in perceptual loss. These results highlight the importance of using structured text as an intermediate space to bridge fMRI signals and image reconstruction. Code is available at https://github.com/GraphmindDartmouth/PRISM.

One-sentence summary·Auto-generated by claude-haiku-4-5-20251001

The PRISM framework projects fMRI signals into a structured text space for visual stimulus reconstruction, with object-centric diffusion and attribute/relationship search modules.

Contributions·Auto-generated by claude-haiku-4-5-20251001
  • Discovery that fMRI signals align more closely with a language model's text space than with vision-based or joint text–image representations
  • Object-centric diffusion module generating images by composing individual objects
  • Attribute/relationship search module automatically identifying neural-aligned attributes
Methods used·Auto-generated by claude-haiku-4-5-20251001
  • fMRI decoding
  • text space projection
  • diffusion models
  • object composition
Limitations (author-stated)·Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated)·Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • Neuroscience
  • Functional Magnetic Resonance Imaging
  • Image reconstruction
  • Reconstruction
