Universal Inverse Distillation for Matching Models with Real-Data Supervision (No GANs)
Nikita Maksimovich Kornilov, David Li, Tikhon Mavrin, Aleksei Leonov, Nikita Gushchin, Evgeny Burnaev, Iaroslav Sergeevich Koshelev, Alexander Korotin
Abstract
While achieving exceptional generative quality, modern diffusion, flow, and other matching models suffer from slow inference because they require many steps of iterative generation. Recent distillation methods address this by training efficient one-step generators under the guidance of a pre-trained teacher model. However, these methods are often constrained to a single framework, e.g., only diffusion or only flow models. Furthermore, they are inherently data-free: to benefit from real data, they must resort to additional, complex adversarial training with an extra discriminator model. In this paper, we present **RealUID**, a universal distillation framework for all matching models that seamlessly incorporates real data into the distillation procedure without GANs. Our **RealUID** approach offers a simple theoretical foundation that covers previous distillation methods for Flow Matching and Diffusion models and extends to their modifications, such as Bridge Matching and Stochastic Interpolants. The code is available at https://github.com/David-cripto/RealUID.
RealUID provides universal distillation for matching models without GANs, incorporating real data into one-step generator training.
- Universal distillation framework applicable to diffusion, flow, Bridge Matching, and Stochastic Interpolant models
- Seamless incorporation of real data into distillation without requiring discriminator networks
- Simple theoretical foundation unifying previous distillation methods and extending to model variants
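For readers new to matching models, the following is a minimal NumPy sketch of the generic conditional Flow Matching objective that such teacher models are trained with — not the RealUID distillation objective, which the paper derives. A velocity model (here a hand-rolled linear stand-in for a neural network) is regressed toward the conditional velocity target `x1 - x0` along the linear interpolation path between noise and data; sampling from the trained teacher then requires integrating this field over many steps, which is exactly the inference cost that one-step distillation removes.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_batch(n):
    """Sample (noise, data, time) triples for conditional Flow Matching."""
    x0 = rng.normal(size=(n, 2))                       # prior / noise sample
    x1 = rng.normal(loc=3.0, scale=0.5, size=(n, 2))   # toy "real data"
    t = rng.uniform(size=(n, 1))
    return x0, x1, t


def features(x, t):
    # Linear stand-in for a neural velocity network v_theta(x, t).
    return np.concatenate([x, t * x, t, np.ones_like(t)], axis=1)


def fm_loss(W, x0, x1, t):
    xt = (1 - t) * x0 + t * x1     # linear interpolation path
    target = x1 - x0               # conditional velocity target
    pred = features(xt, t) @ W.T
    return ((pred - target) ** 2).mean(), xt, target


W = np.zeros((2, 6))               # weights of the linear velocity model
x0, x1, t = sample_batch(4096)
initial, _, _ = fm_loss(W, x0, x1, t)

lr = 0.02
for _ in range(2000):
    x0, x1, t = sample_batch(256)
    _, xt, target = fm_loss(W, x0, x1, t)
    feats = features(xt, t)
    grad = 2 * (feats @ W.T - target).T @ feats / len(x0)  # d(MSE)/dW
    W -= lr * grad

x0, x1, t = sample_batch(4096)
final, _, _ = fm_loss(W, x0, x1, t)
print(f"FM loss: {initial:.2f} -> {final:.2f}")
```

The loss drops substantially as the model fits the velocity field on this toy Gaussian task; in practice the same regression is done with a deep network, and distillation methods like RealUID then compress the resulting multi-step sampler into a single generator pass.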
- Knowledge distillation
- Flow matching
- Diffusion models
- Inverse distillation
- CIFAR-10
- CelebA
The authors did not state explicit limitations.
The authors did not state explicit future directions.
Author keywords
- Diffusion models
- Flow Matching
- Acceleration of diffusion/flow models
- Distillation of diffusion/flow models
Related orals
GLASS Flows: Efficient Inference for Reward Alignment of Flow and Diffusion Models
GLASS Flows samples Markov transitions via inner flow matching models to improve inference-time reward alignment in flow and diffusion models.
Neon: Negative Extrapolation From Self-Training Improves Image Generation
Neon inverts model degradation from self-training by extrapolating away from it, improving generative models with minimal compute.
Generative Human Geometry Distribution
Introduces a distribution-over-distribution model combining geometry distributions with two-stage flow matching for 3D human generation.
Cross-Domain Lossy Compression via Rate- and Classification-Constrained Optimal Transport
Cross-domain lossy compression unifies rate and classification constraints via optimal transport framework.
NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale
NextStep-1 achieves state-of-the-art autoregressive text-to-image generation by modeling continuous image tokens with lightweight flow matching instead of diffusion.