DCFold: Efficient Protein Structure Generation with Single Forward Pass
Zhe Zhang, Yuanning Feng, Yuxuan Song, Keyue Qiu, Hao Zhou, Wei-Ying Ma
Abstract
AlphaFold3 introduces a diffusion-based architecture that elevates protein structure prediction to all-atom resolution with improved accuracy. This state-of-the-art performance has established AlphaFold3 as a foundation model for diverse generation and design tasks. However, its iterative design substantially increases inference time, limiting practical deployment in downstream settings such as virtual screening and protein design. We propose DCFold, a single-step generative model that attains AlphaFold3-level accuracy. Our Dual Consistency training framework, which incorporates a novel Temporal Geodesic Matching (TGM) scheduler, enables DCFold to achieve a 15× acceleration in inference while maintaining predictive fidelity. We validate its effectiveness across both structure prediction and binder design benchmarks.
Distills AlphaFold3 into a single-step sampler via Temporal Geodesic Matching, achieving a 15× inference acceleration.
- Develops a dual-consistency distillation framework compressing AlphaFold3 into a high-fidelity single-step sampler
- Introduces a Temporal Geodesic Matching scheduler enabling stable training on variable-length sequences
- Achieves a 15× inference acceleration while maintaining AlphaFold3-level accuracy
- Matches or surpasses AlphaFold3 on structure prediction and binder design benchmarks
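This summary does not spell out the Dual Consistency losses or the TGM scheduler, so the sketch below illustrates only generic consistency distillation on a 1-D toy problem: a closed-form Gaussian "teacher" stands in for AlphaFold3's diffusion module, `time_schedule` is a hypothetical log-spaced placeholder for TGM, and a per-time-bin affine student stands in for the one-step network. Nothing here reproduces the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
MU = 2.0  # toy 1-D data distribution: x0 ~ N(MU, 1)

def teacher_denoise(x_t, t):
    """Closed-form E[x0 | x_t] for x_t = x0 + t*eps; a stand-in for the
    multi-step diffusion teacher (AlphaFold3's denoiser in the paper)."""
    return (x_t + t**2 * MU) / (1.0 + t**2)

def teacher_ode_step(x_t, t, t_next):
    """One Euler step of the probability-flow ODE using the teacher."""
    d = (x_t - teacher_denoise(x_t, t)) / t
    return x_t + (t_next - t) * d

def time_schedule(n, t_min=0.02, t_max=5.0):
    """Hypothetical log-spaced noise schedule; the paper's TGM scheduler
    is not specified in this summary, so this is a plain placeholder."""
    return np.exp(np.linspace(np.log(t_max), np.log(t_min), n))

ts = time_schedule(16)

# Student: per-time-bin affine map f_i(x) = a[i]*x + b[i], initialized to
# the identity so the boundary condition f(x, t_min) ~ x holds from the start.
a, b = np.ones(len(ts)), np.zeros(len(ts))
a_ema, b_ema = a.copy(), b.copy()
ema = 0.9

for step in range(12000):
    i = int(rng.integers(0, len(ts) - 1))
    x0 = MU + rng.standard_normal()
    x_t = x0 + ts[i] * rng.standard_normal()
    x_tn = teacher_ode_step(x_t, ts[i], ts[i + 1])  # teacher moves toward data
    target = a_ema[i + 1] * x_tn + b_ema[i + 1]     # EMA target (stop-gradient)
    err = (a[i] * x_t + b[i]) - target              # consistency residual
    lr = 0.2 / (5.0 + ts[i] ** 2)                   # scale step by E[x_t^2] for stability
    a[i] -= lr * err * x_t                          # SGD on the squared consistency error
    b[i] -= lr * err
    a_ema = ema * a_ema + (1 - ema) * a
    b_ema = ema * b_ema + (1 - ema) * b

# Single-step generation: map noise-scale samples straight to the data scale,
# instead of iterating the teacher ODE over all 16 noise levels.
x_T = MU + ts[0] * rng.standard_normal(1000)
samples = a[0] * x_T + b[0]
print(round(float(samples.mean()), 2))  # should sit near MU = 2.0
```

The speedup in this caricature is simply the ratio of teacher ODE steps to the student's single evaluation; the paper's reported 15× figure refers to its own full pipeline, not to this toy.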
- Knowledge distillation
- Diffusion models
- Protein structure prediction
- Protein Data Bank
The authors did not state explicit limitations.
The authors did not state explicit future directions.
Author keywords
- consistency model
- protein structure generation
Related orals
Universal Inverse Distillation for Matching Models with Real-Data Supervision (No GANs)
RealUID provides universal distillation for matching models without GANs, incorporating real data into one-step generator training.
GLASS Flows: Efficient Inference for Reward Alignment of Flow and Diffusion Models
GLASS Flows samples Markov transitions via inner flow matching models to improve inference-time reward alignment in flow and diffusion models.
Neon: Negative Extrapolation From Self-Training Improves Image Generation
Neon inverts model degradation from self-training by extrapolating away from it, improving generative models with minimal compute.
Generative Human Geometry Distribution
Introduces a distribution-over-distribution model combining geometry distributions with two-stage flow matching for human 3D generation.
Cross-Domain Lossy Compression via Rate- and Classification-Constrained Optimal Transport
Cross-domain lossy compression unifies rate and classification constraints via an optimal transport framework.