ICLR 2026 Orals

FlashVID: Efficient Video Large Language Models via Training-free Tree-based Spatiotemporal Token Merging

Ziyang Fan, Keyu Chen, Ruilong Xing, Yulin Li, Li Jiang, Zhuotao Tian

LLMs & Reasoning · Thu, Apr 23 · 4:03 PM–4:13 PM · 202 A/B · Avg rating: 5.50 (range: 4–8)
Author-provided TL;DR

We introduce FlashVID, a training-free and plug-and-play inference acceleration framework for Video LLMs, delivering a substantial speedup with negligible performance degradation.

Abstract

Although Video Large Language Models (VLLMs) have shown remarkable capabilities in video understanding, they must process large volumes of visual tokens, which causes significant computational inefficiency. Existing VLLM acceleration frameworks usually compress spatial and temporal redundancy independently, overlooking spatiotemporal relationships and thereby yielding suboptimal compression. Due to the dynamic nature of video, highly correlated visual features are likely to change in spatial position, scale, orientation, and other attributes over time. Building on this insight, we introduce FlashVID, a training-free inference acceleration framework for VLLMs. Specifically, FlashVID uses Attention and Diversity-based Token Selection (ADTS) to select the most representative tokens as a basic video representation, then applies Tree-based Spatiotemporal Token Merging (TSTM) for fine-grained spatiotemporal redundancy elimination. Extensive experiments on three representative VLLMs across five video understanding benchmarks demonstrate the effectiveness and generalization of our method. Notably, while retaining only 10% of visual tokens, FlashVID preserves 99.1% of the performance of LLaVA-OneVision. Consequently, FlashVID can serve as a training-free, plug-and-play module for extending long-video inputs: it enables a 10× increase in video frame input to Qwen2.5-VL, yielding a relative improvement of 8.6% within the same computational budget. Code is available at https://github.com/Fanziyang-v/FlashVID.
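
The abstract gives only the high-level recipe for ADTS: keep tokens that are both salient and mutually diverse. As a rough illustration of that idea, here is a minimal PyTorch sketch of greedy attention-and-diversity selection in the style of maximal marginal relevance. The function name, the scoring rule, and the trade-off weight `alpha` are assumptions for illustration, not the paper's actual ADTS.

```python
import torch
import torch.nn.functional as F

def select_tokens(tokens: torch.Tensor, attn: torch.Tensor, k: int, alpha: float = 0.5):
    """Greedily keep k tokens, trading attention relevance against redundancy.

    tokens: (N, D) visual token features
    attn:   (N,)   per-token attention score (e.g., attention each token receives)
    alpha:  relevance/diversity trade-off in [0, 1] (hypothetical knob)
    """
    feats = F.normalize(tokens, dim=-1)
    sim = feats @ feats.T                          # (N, N) pairwise cosine similarity
    keep = [int(attn.argmax())]                    # seed with the highest-attention token
    available = torch.ones(tokens.size(0), dtype=torch.bool)
    available[keep[0]] = False
    while len(keep) < k:
        redundancy = sim[:, keep].max(dim=1).values        # max similarity to the kept set
        score = alpha * attn - (1.0 - alpha) * redundancy  # MMR-style objective
        score[~available] = float("-inf")                  # never re-pick a kept token
        idx = int(score.argmax())
        keep.append(idx)
        available[idx] = False
    return tokens[keep], keep

# Toy usage: 8 frames x 196 patch tokens, keep 10% of them.
tokens = torch.randn(8 * 196, 1024)
attn = torch.rand(8 * 196)
kept, indices = select_tokens(tokens, attn, k=int(0.1 * tokens.size(0)))
print(kept.shape)  # torch.Size([156, 1024])
```

At each step the sketch picks the token with the best balance of attention score and dissimilarity to everything already kept, which captures the "attention and diversity" intuition without claiming the paper's exact criterion.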

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Accelerates video LLMs via training-free spatiotemporal token merging, retaining 99.1% performance with 10% of tokens.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Proposes Attention and Diversity-based Token Selection for representative token filtering
  • Introduces Tree-based Spatiotemporal Token Merging for fine-grained redundancy elimination (see the sketch after this list)
  • Achieves a 10× increase in video frames for Qwen2.5-VL within the same computational budget
  • Training-free and plug-and-play module compatible with multiple VLLMs
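
As a companion illustration of the merging half, the sketch below builds a binary tree over frames: adjacent frames are matched token-by-token with cosine similarity, near-duplicate pairs are averaged, and the procedure repeats level by level until one compact token set represents the clip. The threshold `tau`, the pairwise matching rule, and all names are hypothetical; the paper's TSTM may differ substantially.

```python
import torch
import torch.nn.functional as F

def merge_adjacent(frames: list, tau: float = 0.9) -> list:
    """One tree level: fuse near-duplicate tokens between adjacent frame pairs."""
    out = []
    for i in range(0, len(frames) - 1, 2):
        a, b = frames[i], frames[i + 1]
        sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).T  # (Na, Nb)
        best, j = sim.max(dim=1)      # best temporal match in b for each token of a
        dup = best > tau              # tokens of a that persist into b
        merged = 0.5 * (a[dup] + b[j[dup]])                      # average matched pairs
        keep_b = torch.ones(b.size(0), dtype=torch.bool)
        keep_b[j[dup]] = False        # drop b-tokens absorbed by a merge
        out.append(torch.cat([a[~dup], merged, b[keep_b]], dim=0))
    if len(frames) % 2:               # odd trailing frame passes through to the next level
        out.append(frames[-1])
    return out

def tree_merge(frames: list, tau: float = 0.9) -> torch.Tensor:
    """Merge bottom-up until a single token set represents the whole clip."""
    while len(frames) > 1:
        frames = merge_adjacent(frames, tau)
    return frames[0]

# Toy usage: 8 frames of 196 tokens each. With random features few tokens
# cross the similarity threshold; on real video features many more merge.
clip = [torch.randn(196, 64) for _ in range(8)]
print(tree_merge(clip, tau=0.9).shape)
```

Because merging happens between temporally adjacent frames first and then up the tree, a token that drifts in position or scale across time can still be matched and fused, which is the spatiotemporal intuition the abstract describes.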
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Token merging
  • Attention-based selection
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • Efficient Large Multimodal Models
  • Video Large Language Models
  • Visual Token Compression
