MetaEmbed: Scaling Multimodal Retrieval at Test-Time with Flexible Late Interaction
Zilin Xiao, Qi Ma, Mengting Gu, Chun-cheng Jason Chen, Xintao Chen, Vicente Ordonez, Vijai Mohan
Abstract
Universal multimodal embedding models have achieved great success in capturing semantic relevance between queries and candidates. However, current methods either condense queries and candidates into a single vector, potentially limiting the expressiveness for fine-grained information, or produce too many vectors that are prohibitively expensive for multi-vector retrieval. In this work, we introduce MetaEmbed, a new framework for multimodal retrieval that rethinks how multimodal embeddings are constructed and interacted with at scale. During training, a fixed number of learnable Meta Tokens are appended to the input sequence. At test-time, their last-layer contextualized representations serve as compact yet expressive multi-vector embeddings. Through the proposed Matryoshka Multi-Vector Retrieval training, MetaEmbed learns to organize information by granularity across multiple vectors. As a result, we enable test-time scaling in multimodal retrieval where users can balance retrieval quality against efficiency demands by selecting the number of tokens used for indexing and retrieval interactions. Extensive evaluations on the Massive Multimodal Embedding Benchmark (MMEB) and the Visual Document Retrieval Benchmark (ViDoRe) confirm that MetaEmbed achieves state-of-the-art retrieval performance while scaling robustly to models with 32B parameters. Code is available at https://github.com/facebookresearch/MetaEmbed.
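The flexible late interaction described in the abstract can be sketched as a ColBERT-style MaxSim score truncated to a chosen Meta Token budget. This is a minimal numpy sketch, not the paper's implementation; the function name, shapes, and the assumption of equal budgets on both sides are illustrative:

```python
import numpy as np

def late_interaction_score(query_vecs, cand_vecs, budget):
    """MaxSim late interaction over the first `budget` Meta Token embeddings.

    query_vecs, cand_vecs: (num_meta_tokens, dim) arrays of last-layer
    Meta Token representations. A smaller `budget` trades retrieval
    quality for cheaper indexing and scoring (hypothetical sketch).
    """
    q = query_vecs[:budget]  # (budget, dim)
    c = cand_vecs[:budget]   # (budget, dim)
    # L2-normalize so dot products are cosine similarities
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    sim = q @ c.T            # (budget, budget) pairwise similarities
    # Each query vector matches its best candidate vector; scores are summed
    return sim.max(axis=1).sum()
```

Because candidates are ranked by this score, lowering `budget` shrinks both the index (fewer vectors stored per candidate) and the per-pair interaction cost, which is the test-time scaling knob the abstract describes.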
MetaEmbed uses learnable Meta Tokens with Matryoshka training to enable test-time scaling for multimodal retrieval, balancing quality against efficiency.
- Introduces MetaEmbed framework using learnable Meta Tokens whose contextualized representations serve as compact multi-vector embeddings
- Proposes Matryoshka Multi-Vector Retrieval training to organize information by granularity across multiple vectors
- Enables flexible late interaction at test-time where users can balance retrieval quality against efficiency by selecting number of tokens
- Meta tokens
- Matryoshka training
- Late interaction
- Multi-vector retrieval
- Massive Multimodal Embedding Benchmark (MMEB)
- Visual Document Retrieval Benchmark (ViDoRe)
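The Matryoshka Multi-Vector Retrieval training listed above can be sketched as a contrastive loss averaged over several Meta Token prefix lengths, so that each prefix remains a usable embedding on its own. This is a hedged numpy sketch under assumed shapes and hyperparameters (`budgets`, `tau`), not the paper's exact recipe:

```python
import numpy as np

def matryoshka_multivector_loss(q_vecs, c_vecs, budgets=(1, 2, 4, 8), tau=0.05):
    """InfoNCE-style loss computed at several Meta Token prefix lengths.

    q_vecs, c_vecs: (batch, num_meta_tokens, dim); row i of q_vecs and
    row i of c_vecs form a positive pair. All names and values here are
    illustrative assumptions.
    """
    batch = q_vecs.shape[0]
    total = 0.0
    for k in budgets:
        q = q_vecs[:, :k]
        c = c_vecs[:, :k]
        # L2-normalize each vector
        q = q / np.linalg.norm(q, axis=-1, keepdims=True)
        c = c / np.linalg.norm(c, axis=-1, keepdims=True)
        # All-pairs MaxSim scores: sim[a, b, i, j] = q[a, i] . c[b, j]
        sim = np.einsum('qkd,cjd->qckj', q, c)        # (B, B, k, k)
        scores = sim.max(axis=-1).sum(axis=-1) / tau  # (B, B)
        # Numerically stable log-softmax over candidates per query
        scores = scores - scores.max(axis=1, keepdims=True)
        log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
        # InfoNCE: diagonal entries are the positive pairs
        total += -np.trace(log_probs) / batch
    return total / len(budgets)
```

Averaging the loss across prefix lengths is what lets a single set of Meta Tokens serve every retrieval budget at test time, with coarse information concentrated in the earliest tokens.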
Index memory consumption grows proportionally with the retrieval budget, which presents challenges for large-scale deployments, though this can be mitigated through balanced budgets or CPU memory offloading.
The authors did not state explicit future directions.
Author keywords
- multimodal retrieval
- information retrieval
Related orals
Multimodal Aligned Semantic Knowledge for Unpaired Image-text Matching
MASK aligns semantic knowledge between images and text, using word embeddings as bridges to match out-of-distribution words in unpaired image-text matching.
ScaleCUA: Scaling Open-Source Computer Use Agents with Cross-Platform Data
ScaleCUA scales open-source computer-use agents with a cross-platform dataset and a dual-loop data pipeline.
VibeVoice: Expressive Podcast Generation with Next-Token Diffusion
Presents VibeVoice, a model for zero-shot, expressive, long-form, multi-speaker podcast generation using next-token diffusion.
UALM: Unified Audio Language Model for Understanding, Generation and Reasoning
UALM is a unified audio language model that handles audio understanding, text-to-audio generation, and multimodal reasoning in a single model, with UALM-Reason for cross-modal generative reasoning.
BioX-Bridge: Model Bridging for Unsupervised Cross-Modal Knowledge Transfer across Biosignals
BioX-Bridge enables parameter-efficient cross-modal knowledge transfer across biosignals using lightweight prototype-based bridge networks between foundation models.