ICLR 2026 Orals

Neon: Negative Extrapolation From Self-Training Improves Image Generation

Sina Alemohammad, Zhangyang Wang, Richard Baraniuk

Diffusion & Flow Matching Thu, Apr 23 · 11:42 AM–11:52 AM · 201 A/B Avg rating: 7.00 (6–8)
Author-provided TL;DR

Instead of simply fine-tuning a generative model on its own synthetic outputs, briefly fine-tune it to find the direction of model collapse, then apply the reverse of that update to the original model for a major performance boost.

Abstract

Scaling generative AI models is bottlenecked by the scarcity of high-quality training data. The ease of synthesizing from a generative model suggests using (unverified) synthetic data to augment a limited corpus of real data for fine-tuning, in the hope of improving performance. Unfortunately, the resulting positive feedback loop leads to model autophagy disorder (MAD, aka model collapse), which results in a rapid degradation in sample quality and/or diversity. In this paper, we introduce Neon (for Negative Extrapolation frOm self-traiNing), a new learning method that turns the degradation from self-training into a powerful signal for self-improvement. Given a base model, Neon first fine-tunes it on its own self-synthesized data but then, counterintuitively, reverses its gradient updates to extrapolate away from the degraded weights. We prove that Neon works because typical inference samplers that favor high-probability regions create a predictable anti-alignment between the synthetic and real data population gradients, which negative extrapolation corrects to better align the model with the true data distribution. Neon is remarkably easy to implement via a simple post-hoc merge that requires no new real data, works effectively with as few as 1k synthetic samples, and typically uses less than 1% additional training compute. We demonstrate Neon's universality across a range of architectures (diffusion, flow matching, autoregressive, and inductive moment matching models) and datasets (ImageNet, CIFAR-10, and FFHQ). In particular, on ImageNet 256x256, Neon elevates the xAR-L model to a new state-of-the-art FID of 1.02 with only 0.36% additional training compute.
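The post-hoc merge described in the abstract amounts to a one-line weight update. A minimal sketch in plain Python, assuming model weights are represented as a simple name-to-value mapping (the function name `neon_merge` and the merge coefficient `w` are illustrative assumptions, not the paper's notation):

```python
def neon_merge(theta_base, theta_selftrained, w=1.0):
    """Negative extrapolation: move the base weights *away* from the
    update direction learned during brief self-training on synthetic data.

        theta_neon = theta_base - w * (theta_selftrained - theta_base)
    """
    return {
        name: theta_base[name] - w * (theta_selftrained[name] - theta_base[name])
        for name in theta_base
    }

# Toy example with scalar "weights": self-training drifted a parameter
# from 1.0 toward 1.2 (the collapse direction); Neon extrapolates the
# base model in the opposite direction.
base = {"layer.weight": 1.0}
self_trained = {"layer.weight": 1.2}
merged = neon_merge(base, self_trained, w=0.5)
print(merged["layer.weight"])  # approximately 0.9
```

In practice the same arithmetic would be applied elementwise to full weight tensors (e.g., two PyTorch state dicts), with the extrapolation coefficient `w` tuned per model; the point is that no new real data or retraining is needed beyond the brief self-training run.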

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Neon inverts model degradation from self-training by extrapolating away from it, improving generative models with minimal compute.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Introduces Neon, a post-hoc method that reverses gradient updates from self-training to counter model autophagy
  • Proves that mode-seeking samplers create predictable anti-alignment between synthetic and real data gradients
  • Shows negative extrapolation corrects sampler bias and enhances recall and generation fidelity
  • Achieves state-of-the-art FID of 1.02 on ImageNet 256x256 with only 0.36% additional compute
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Gradient extrapolation
  • Self-training
  • Diffusion models
  • Flow matching
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • ImageNet 256x256
  • CIFAR-10
  • FFHQ
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Identify diversity-promoting samplers that enable positive alignment between synthetic and real data, allowing direct self-improvement
  • Actively synthesize optimal "bad" datasets that elicit the degradation direction maximizing the corrective signal

Author keywords

  • Generative Models
  • Self-Improvement
  • Weight Merging
  • Image Generation

Related orals

Generative Human Geometry Distribution

Introduces a distribution-over-distribution model that combines geometry distributions with two-stage flow matching for 3D human generation.

Avg rating: 5.50 (2–8) · Xiangjun Tang et al.