ICLR 2026 Orals

MC-Search: Evaluating and Enhancing Multimodal Agentic Search with Structured Long Reasoning Chains

Xuying Ning, Dongqi Fu, Tianxin Wei, Mengting Ai, Jiaru Zou, Ting-Wei Li, Hanghang Tong, Yada Zhu, Hendrik Hamann, Jingrui He

LLMs & Reasoning · Fri, Apr 24 · 11:06 AM–11:16 AM · 203 A/B · Avg rating: 5.00 (4–6)

Abstract

With the increasing demand for step-wise, cross-modal, and knowledge-grounded reasoning, multimodal large language models (MLLMs) are evolving beyond the traditional fixed retrieve-then-generate paradigm toward more sophisticated agentic multimodal retrieval-augmented generation (MM-RAG). Existing benchmarks, however, mainly focus on simplified QA with short retrieval chains, leaving adaptive planning and multimodal reasoning underexplored. We present MC-Search, the first benchmark for agentic MM-RAG with long, step-wise annotated reasoning chains spanning five representative reasoning structures. Each example specifies sub-questions, retrieval modalities, supporting facts, and intermediate answers, with fidelity ensured by HAVE (Hop-wise Attribution and Verification of Evidence), resulting in 3,333 high-quality examples averaging 3.7 hops. Beyond answer accuracy, MC-Search introduces new process-level metrics that score reasoning quality as well as stepwise retrieval and planning accuracy. By developing a unified agentic MM-RAG pipeline, we benchmark six leading MLLMs and reveal systematic issues such as over- and under-retrieval and modality-misaligned planning. Finally, we introduce Search-Align, a process-supervised fine-tuning framework leveraging verified reasoning chains, showing that our data not only enables faithful evaluation but also improves planning and retrieval fidelity in open-source MLLMs.
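The agentic MM-RAG loop the abstract describes (plan a sub-question, pick a retrieval modality, gather evidence, repeat until the planner can answer) can be sketched as follows. This is an illustrative toy, not the paper's actual pipeline: the function names, the `Hop` record, and the tiny knowledge base are all assumptions made for the example.

```python
# Minimal sketch of an agentic plan-retrieve loop, in the spirit of the
# abstract's description. All names and the toy knowledge base below are
# illustrative assumptions, not the paper's implementation.

from dataclasses import dataclass, field

@dataclass
class Hop:
    sub_question: str
    modality: str          # "text" or "image"
    evidence: str

@dataclass
class AgentTrace:
    hops: list = field(default_factory=list)
    answer: str = ""

def run_agent(question, plan, retrieve, answer_fn, max_hops=5):
    """Iteratively plan sub-questions, retrieve modality-specific evidence,
    and stop when the planner signals it has enough context to answer."""
    trace = AgentTrace()
    context = []
    for _ in range(max_hops):
        step = plan(question, context)      # -> (sub_q, modality) or None
        if step is None:
            break                           # planner decides it is done
        sub_q, modality = step
        evidence = retrieve(sub_q, modality)
        trace.hops.append(Hop(sub_q, modality, evidence))
        context.append(evidence)
    trace.answer = answer_fn(question, context)
    return trace

# Toy two-hop run: first an image lookup, then a text lookup.
KB = {
    ("who is pictured?", "image"): "caption: Marie Curie",
    ("when was Marie Curie born?", "text"): "Marie Curie was born in 1867",
}

def toy_plan(question, context):
    if not context:
        return ("who is pictured?", "image")
    if len(context) == 1:
        return ("when was Marie Curie born?", "text")
    return None

trace = run_agent(
    "When was the pictured person born?",
    toy_plan,
    lambda sub_q, modality: KB[(sub_q, modality)],
    lambda question, context: "1867",
)
```

The key property the benchmark probes is visible in `trace.hops`: each hop records both the sub-question and the modality chosen, so over-retrieval, under-retrieval, and modality-misaligned planning can all be read off the trace rather than only from the final answer.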

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

MC-Search benchmark evaluates multimodal agentic RAG with step-wise reasoning chains and introduces Search-Align for improved planning.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Presents MC-Search, the first benchmark for agentic multimodal RAG with long, step-wise annotated reasoning chains
  • Introduces HAVE (Hop-wise Attribution and Verification of Evidence) to ensure fidelity across 3,333 examples
  • Develops a unified agentic MM-RAG pipeline that benchmarks six leading MLLMs and reveals systematic retrieval and planning issues
  • Introduces Search-Align, a process-supervised fine-tuning framework that improves planning and retrieval fidelity
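One plausible reading of the process-level metrics mentioned above is a hop-by-hop comparison of a predicted chain against the gold annotated chain: planning accuracy checks the sub-question at each position, retrieval accuracy checks the chosen modality. The exact definitions in MC-Search may differ; this sketch, with hypothetical field names, only illustrates the idea of scoring the process rather than the final answer.

```python
# Illustrative stepwise metric: fraction of gold hops whose `key` field the
# predicted chain matches at the same position. The field names ("modality",
# "topic") and the metric definition are assumptions for this sketch, not
# the paper's exact formulation.

def stepwise_accuracy(pred_chain, gold_chain, key):
    """Position-aligned match rate over gold hops; missing predicted
    hops count as errors."""
    if not gold_chain:
        return 0.0
    hits = sum(
        1
        for i, gold in enumerate(gold_chain)
        if i < len(pred_chain) and pred_chain[i][key] == gold[key]
    )
    return hits / len(gold_chain)

gold = [
    {"modality": "image", "topic": "identify person"},
    {"modality": "text", "topic": "birth year"},
]
pred = [
    {"modality": "text", "topic": "identify person"},  # wrong modality
    {"modality": "text", "topic": "birth year"},
]

retrieval_acc = stepwise_accuracy(pred, gold, "modality")  # 1 of 2 hops match
planning_acc = stepwise_accuracy(pred, gold, "topic")      # 2 of 2 hops match
```

Note how a model can plan the right sub-questions yet still fail the retrieval metric by querying the wrong modality, which is exactly the modality-misaligned planning the abstract reports.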
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Multimodal RAG
  • Agent planning
  • Chain-of-thought reasoning
  • Fine-tuning
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • MC-Search benchmark
  • Wikipedia
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Broaden evaluation to stronger reasoning models and extend benchmark to additional domains such as science and mathematics

Author keywords

  • Multimodal
  • RAG
  • Vision-Language
  • Agent
  • Benchmark
