FALCON: Few-step Accurate Likelihoods for Continuous Flows
Danyal Rehman, Tara Akhound-Sadegh, Artem Gazizov, Yoshua Bengio, Alexander Tong
Few-step Flow Matching with Accurate Likelihoods for Scalable Boltzmann Generators
Abstract
Scalable sampling of molecular states in thermodynamic equilibrium is a long-standing challenge in statistical physics. Boltzmann Generators tackle this problem by pairing a generative model, capable of exact likelihood computation, with importance sampling to obtain consistent samples under the target distribution. Current Boltzmann Generators primarily use continuous normalizing flows (CNFs) trained with flow matching for efficient training of powerful models. However, likelihood calculation for these models is extremely costly, requiring thousands of function evaluations per sample, severely limiting their adoption. In this work, we propose Few-Step Accurate Likelihoods for Continuous Flows (FALCON), a method which allows for few-step sampling with a likelihood accurate enough for importance sampling applications by introducing a hybrid training objective that encourages invertibility. We show FALCON outperforms state-of-the-art normalizing flow models for molecular Boltzmann sampling and is two orders of magnitude faster than the equivalently performing CNF model. FALCON code is available at: https://github.com/danyalrehman/FALCON.
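The importance-sampling reweighting described in the abstract can be sketched generically: given model log-likelihoods log q(x) and target potential energies U(x), samples are reweighted toward the Boltzmann distribution exp(-U(x)/kT). This is a minimal illustrative sketch, not the paper's implementation; the function name `importance_weights` and the NumPy formulation are assumptions.

```python
import numpy as np

def importance_weights(log_q, energy, beta=1.0):
    """Self-normalized importance weights for Boltzmann reweighting.

    log_q:  model log-likelihoods log q(x), one per sample
    energy: target potential energies U(x), one per sample
    beta:   inverse temperature 1/(kT)
    """
    # log w = log p~(x) - log q(x), with unnormalized target p~(x) = exp(-beta * U(x))
    log_w = -beta * np.asarray(energy) - np.asarray(log_q)
    log_w -= log_w.max()        # subtract max for numerical stability
    w = np.exp(log_w)
    return w / w.sum()          # self-normalized weights sum to 1

# Toy usage: reweight three samples drawn from a proposal q toward exp(-U)
log_q = np.array([-1.0, -1.2, -0.8])
U = np.array([0.5, 0.3, 0.9])
w = importance_weights(log_q, U)
```

Reweighted expectations under the target are then weighted averages over samples; this is why the model's likelihood must be accurate, as errors in log q(x) bias the weights directly.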
Author keywords
- Generative Models
- Flow Matching
- Boltzmann Generators
- AI for Science
Related orals
Universal Inverse Distillation for Matching Models with Real-Data Supervision (No GANs)
RealUID provides universal distillation for matching models without GANs, incorporating real data into one-step generator training.
GLASS Flows: Efficient Inference for Reward Alignment of Flow and Diffusion Models
GLASS Flows samples Markov transitions via inner flow matching models to improve inference-time reward alignment in flow and diffusion models.
Neon: Negative Extrapolation From Self-Training Improves Image Generation
Neon inverts model degradation from self-training by extrapolating away from it, improving generative models with minimal compute.
Generative Human Geometry Distribution
Introduces a distribution-over-distribution model combining geometry distributions with two-stage flow matching for 3D human generation.
Cross-Domain Lossy Compression via Rate- and Classification-Constrained Optimal Transport
Cross-domain lossy compression unifies rate and classification constraints via optimal transport framework.