ICLR 2026 Orals

DiffusionNFT: Online Diffusion Reinforcement with Forward Process

Kaiwen Zheng, Huayu Chen, Haotian Ye, Haoxiang Wang, Qinsheng Zhang, Kai Jiang, Hang Su, Stefano Ermon, Jun Zhu, Ming-Yu Liu

Reinforcement Learning & Agents Thu, Apr 23 · 11:06 AM–11:16 AM · 201 A/B Avg rating: 7.33 (6–8)
Author-provided TL;DR

We propose a new online reinforcement learning (RL) algorithm for diffusion and flow models based on the forward process.

Abstract

Online reinforcement learning (RL) has been central to post-training language models, but its extension to diffusion models remains challenging due to intractable likelihoods. Recent works discretize the reverse sampling process to enable GRPO-style training, yet they inherit fundamental drawbacks, including solver restrictions, forward–reverse inconsistency, and complicated integration with classifier-free guidance (CFG). We introduce Diffusion Negative-aware FineTuning (DiffusionNFT), a new online RL paradigm that optimizes diffusion models directly on the forward process via flow matching. DiffusionNFT contrasts positive and negative generations to define an implicit policy improvement direction, naturally incorporating reinforcement signals into the supervised learning objective. This formulation enables training with arbitrary black-box solvers, eliminates the need for likelihood estimation, and requires only clean images rather than sampling trajectories for policy optimization. DiffusionNFT is up to $25\times$ more efficient than FlowGRPO in head-to-head comparisons, while being CFG-free. For instance, DiffusionNFT improves the GenEval score from 0.24 to 0.98 within 1k steps, whereas FlowGRPO needs over 5k steps and additional CFG to reach 0.95. By leveraging multiple reward models, DiffusionNFT significantly boosts the performance of SD3.5-Medium on every benchmark tested.
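The core idea in the abstract — turn reinforcement signals into a supervised flow-matching objective on the forward process, contrasting positive and negative generations — can be sketched in toy form. Everything below is an illustrative assumption, not the paper's method: the rectified-flow interpolant convention, the `model(xt, t)` signature, and in particular the simple `+1 / -0.1` weighting `w`, which merely stands in for DiffusionNFT's implicit policy-improvement direction. Note that only clean samples `x0` and their reward advantages are needed, not sampling trajectories.

```python
import numpy as np

def flow_matching_loss_per_sample(model, x0, t, eps):
    # Forward-process interpolant (rectified-flow convention, an assumption):
    # x_t = (1 - t) * x0 + t * eps, with target velocity v* = eps - x0.
    xt = (1 - t)[:, None] * x0 + t[:, None] * eps
    v_target = eps - x0
    v_pred = model(xt, t)
    return np.mean((v_pred - v_target) ** 2, axis=-1)

def contrastive_fm_loss(model, x0, advantages, rng):
    # Toy negative-aware objective: positives (advantage > 0) are reinforced
    # with the standard flow-matching loss; negatives contribute a small
    # repulsive term. The actual DiffusionNFT weighting differs; this only
    # illustrates how reward signals can enter a supervised forward-process
    # objective without any likelihood estimation.
    n, d = x0.shape
    t = rng.uniform(size=n)                  # random diffusion times
    eps = rng.standard_normal((n, d))        # forward-process noise
    per_sample = flow_matching_loss_per_sample(model, x0, t, eps)
    w = np.where(advantages > 0, 1.0, -0.1)  # hypothetical contrastive weights
    return np.mean(w * per_sample)
```

Because the loss is computed purely from clean images and forward-process noising, any black-box sampler can be used to generate the candidate images that the reward model scores.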

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

DiffusionNFT enables efficient online reinforcement learning for diffusion models via forward process optimization with up to 25x efficiency gains.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • DiffusionNFT paradigm for online RL on diffusion models using forward process
  • Eliminates likelihood estimation and reverse process solver restrictions
  • Up to 25× more efficient than FlowGRPO while remaining CFG-free
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Diffusion models
  • Online reinforcement learning
  • Flow matching
  • Reward optimization
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • Diffusion Models
  • Reinforcement Learning
  • Flow Matching
