ICLR 2026 Orals

Mean Flow Policy with Instantaneous Velocity Constraint for One-step Action Generation

Guojian Zhan, Letian Tao, Pengcheng Wang, Yixiao Wang, Yuxin Chen, Yiheng Li, Hongyang Li, Masayoshi Tomizuka, Shengbo Eben Li

Reinforcement Learning & Agents · Fri, Apr 24 · 3:51 PM–4:01 PM · 201 A/B · Avg rating: 7.00 (4–8)
Author-provided TL;DR

We introduce the mean velocity policy, a new RL policy that, along with a novel instantaneous velocity constraint, achieves state-of-the-art performance and the fastest training and inference speed.

Abstract

Learning expressive and efficient policy functions is a promising direction in reinforcement learning (RL). While flow-based policies have recently proven effective in modeling complex action distributions with a fast deterministic sampling process, they still face a trade-off between expressiveness and computational burden, which is typically controlled by the number of flow steps. In this work, we propose mean velocity policy (MVP), a new generative policy function that models the mean velocity field to achieve the fastest one-step action generation. To ensure its high expressiveness, an instantaneous velocity constraint (IVC) is introduced on the mean velocity field during training. We theoretically prove that this design explicitly serves as a crucial boundary condition, thereby improving learning accuracy and enhancing policy expressiveness. Empirically, our MVP achieves state-of-the-art success rates across several challenging robotic manipulation tasks from Robomimic and OGBench. It also delivers substantial improvements in training and inference speed over existing flow-based policy baselines.
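The abstract's recipe can be sketched in code. The block below is a minimal, hypothetical PyTorch sketch of a MeanFlow-style objective of the kind the abstract describes: a network u(z, r, t) for the mean velocity field is regressed onto a target built from the instantaneous velocity via a Jacobian-vector product, with an added instantaneous-velocity boundary term at r = t standing in for the IVC, and actions are then generated in a single step. The toy MLP, the linear noise path, the loss weighting, and all names are assumptions for illustration, not the paper's implementation.

```python
import torch
from torch.func import jvp

class MeanVelocityNet(torch.nn.Module):
    """Toy stand-in for the mean velocity field u_theta(z, r, t)."""
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim + 2, 64), torch.nn.SiLU(),
            torch.nn.Linear(64, dim),
        )

    def forward(self, z, r, t):
        # r and t enter as extra per-sample scalar features.
        return self.net(torch.cat([z, r, t], dim=-1))

def mvp_loss(model, actions, lam=1.0):
    """MeanFlow-style regression plus an instantaneous velocity term (assumed form)."""
    noise = torch.randn_like(actions)
    t = torch.rand(actions.shape[0], 1)
    r = t * torch.rand_like(t)              # sample 0 <= r <= t
    z = (1 - t) * actions + t * noise       # linear interpolation path
    v = noise - actions                     # instantaneous (conditional) velocity

    # Total derivative du/dt along the path via a Jacobian-vector product:
    # tangents are (dz/dt, dr/dt, dt/dt) = (v, 0, 1).
    u, dudt = jvp(model, (z, r, t),
                  (v, torch.zeros_like(r), torch.ones_like(t)))
    u_tgt = (v - (t - r) * dudt).detach()   # mean-flow identity, stop-gradient
    loss_mf = ((u - u_tgt) ** 2).mean()

    # Boundary condition: at r == t the mean velocity should reduce to
    # the instantaneous velocity (the role the abstract ascribes to the IVC).
    loss_ivc = ((model(z, t, t) - v) ** 2).mean()
    return loss_mf + lam * loss_ivc

@torch.no_grad()
def sample_action(model, dim, n=1):
    """One-step generation: integrate the mean velocity from t=1 back to r=0."""
    e = torch.randn(n, dim)
    return e - model(e, torch.zeros(n, 1), torch.ones(n, 1))
```

The JVP call is also where the author-stated limitation below comes from: forward-mode differentiation through the network costs extra GPU memory during training, while inference needs only the single `sample_action` forward pass.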

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

MVP models the mean velocity field for one-step action generation, using an instantaneous velocity constraint to preserve expressiveness in robotic control.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Proposes mean velocity policy (MVP) modeling mean velocity field for fastest one-step action generation
  • Introduces instantaneous velocity constraint (IVC) improving learning accuracy and enhancing policy expressiveness
  • Theoretically proves IVC serves as crucial boundary condition improving expressiveness
  • Achieves state-of-the-art success rates on Robomimic and OGBench with substantial training and inference speedup
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Flow-based policies
  • Generative models
  • Reinforcement learning
  • Velocity constraints
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • Robomimic
  • OGBench
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • The main limitation is the additional GPU memory consumed during training by the Jacobian-vector product (JVP) operation (from the paper)
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Validate the method on more robotic tasks and on real robotic platforms (from the paper)

Author keywords

  • Reinforcement learning
  • Generative policy
