ICLR 2026 Orals

MotionStream: Real-Time Video Generation with Interactive Motion Controls

Joonghyuk Shin, Zhengqi Li, Richard Zhang, Jun-Yan Zhu, Jaesik Park, Eli Shechtman, Xun Huang

Efficiency, Systems & Kernels · Sat, Apr 25 · 11:06 AM–11:16 AM · 201 A/B · Avg rating: 5.50 (range 4–6)
Author-provided TL;DR

We present MotionStream, a streaming (real-time, infinite-length) video generation system with motion controls, unlocking new possibilities for interactive content creation.

Abstract

Current motion-conditioned video generation methods suffer from prohibitive latency (minutes per video) and non-causal processing that prevents real-time interaction. We present MotionStream, which enables sub-second latency with up to 29 FPS streaming generation on a single GPU. Our approach begins by augmenting a text-to-video model with motion control; this model generates high-quality videos that adhere to the global text prompt and local motion guidance, but cannot perform inference on the fly. We therefore distill this bidirectional teacher into a causal student through Self Forcing with Distribution Matching Distillation, enabling real-time streaming inference. Several key challenges arise when generating videos over long, potentially infinite time horizons: (1) bridging the domain gap between training on finite-length videos and extrapolating to infinite horizons, (2) sustaining high quality by preventing error accumulation, and (3) maintaining fast inference without incurring growing computational cost from an ever-expanding context window. A key to our approach is a carefully designed sliding-window causal attention combined with attention sinks. By incorporating self-rollout with attention sinks and KV cache rolling during training, we properly simulate inference-time extrapolation with a fixed context window, enabling constant-speed generation of arbitrarily long videos. Our models achieve state-of-the-art results in motion following and video quality while being two orders of magnitude faster, uniquely enabling infinite-length streaming. With MotionStream, users can paint trajectories, control cameras, or transfer motion and see results unfold in real time, delivering a truly interactive experience.
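
The constant-cost mechanism described in the abstract (sliding-window causal attention with attention sinks and a rolling KV cache) can be pictured with a short sketch. The snippet below is a minimal, hypothetical illustration rather than the authors' implementation: the class and parameter names (SinkedRollingKVCache, num_sink_tokens, window_tokens) are invented, and only the cache bookkeeping is shown.

```python
# Minimal sketch (not the authors' code) of the constant-cost attention recipe
# described above: a rolling KV cache that pins the earliest tokens as
# "attention sinks" and keeps a sliding window over the most recent tokens, so
# per-step attention cost and memory stay bounded for arbitrarily long streams.
# The class and argument names are invented for illustration.
import torch
import torch.nn.functional as F


class SinkedRollingKVCache:
    def __init__(self, num_sink_tokens: int, window_tokens: int):
        self.num_sink = num_sink_tokens  # tokens pinned from the start of the stream
        self.window = window_tokens      # sliding window over the most recent tokens
        self.k = None                    # [batch, heads, seq, head_dim]
        self.v = None

    def append(self, k_new: torch.Tensor, v_new: torch.Tensor) -> None:
        """Add keys/values for newly generated tokens, then evict stale ones."""
        self.k = k_new if self.k is None else torch.cat([self.k, k_new], dim=2)
        self.v = v_new if self.v is None else torch.cat([self.v, v_new], dim=2)
        overflow = self.k.shape[2] - (self.num_sink + self.window)
        if overflow > 0:
            # Drop the oldest non-sink tokens; the sink tokens are never evicted.
            keep = torch.cat([
                torch.arange(self.num_sink, device=self.k.device),
                torch.arange(self.num_sink + overflow, self.k.shape[2], device=self.k.device),
            ])
            self.k = self.k[:, :, keep]
            self.v = self.v[:, :, keep]

    def attend(self, q: torch.Tensor) -> torch.Tensor:
        """Block-causal step: the new chunk's queries attend to each other and
        to everything still in the cache (sinks + sliding window)."""
        return F.scaled_dot_product_attention(q, self.k, self.v)


# Usage: stream chunks indefinitely; cost per step is bounded by sinks + window.
cache = SinkedRollingKVCache(num_sink_tokens=16, window_tokens=256)
batch, heads, chunk, dim = 1, 8, 4, 64
for step in range(1_000):                             # arbitrarily long stream
    q = torch.randn(batch, heads, chunk, dim)         # queries for the new chunk
    k_new = torch.randn(batch, heads, chunk, dim)
    v_new = torch.randn(batch, heads, chunk, dim)
    cache.append(k_new, v_new)
    out = cache.attend(q)                             # constant-size attention
```

Because the sink tokens never leave the cache, the attended context has a fixed size, which is what allows constant-speed generation over unbounded horizons.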

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Introduces MotionStream, which enables motion-controlled, infinite-length video generation with sub-second latency via causal diffusion.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Augments a text-to-video model with motion control while maintaining adherence to the text prompt and local motion guidance
  • Distills the bidirectional teacher into a causal student via Self Forcing with Distribution Matching Distillation
  • Combines sliding-window causal attention with attention sinks to enable constant-speed infinite-horizon generation
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Diffusion models
  • Knowledge distillation
  • Causal attention
  • Video generation
  • Attention mechanisms
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • The fixed attention-sink mechanism constrains the ability to handle scenarios with complete scene changes
  • Artifacts appear when motion trajectories are extremely rapid or physically implausible
  • Struggles to preserve source details when scenes, text prompts, or intended motions are highly complex
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Explore dynamic attention-sink strategies that adaptively refresh anchor frames for world modeling
  • Investigate effective track augmentation strategies during training to simulate imperfect user inputs
  • Scale to larger backbone models for improved robustness and visual quality

Author keywords

  • Interactive Video Generation
  • Motion Control
  • Real-Time Generation
  • Causal Generation
