Fast training of accurate physics-informed neural networks without gradient descent
Chinmay Datar, Taniya Kapoor, Abhishek Chandra, Qing Sun, Erik Lien Bolager, Iryna Burak, Anna Veselovska, Massimo Fornasier, Felix Dietrich
Our approach, Frozen-PINN, addresses longstanding training and accuracy bottlenecks of Physics-Informed Neural Networks (PINNs), making them high-precision, temporally causal, and extremely fast to train.
Abstract
Solving time-dependent Partial Differential Equations (PDEs) is one of the most critical problems in computational science. While Physics-Informed Neural Networks (PINNs) offer a promising framework for approximating PDE solutions, their accuracy and training speed are limited by two core barriers: gradient-descent-based iterative optimization over complex loss landscapes and non-causal treatment of time as an extra spatial dimension. We present Frozen-PINN, a novel PINN based on the principle of space-time separation that leverages random features instead of training with gradient descent, and incorporates temporal causality by construction. On nine PDE benchmarks, including challenges like extreme advection speeds, shocks, and high-dimensionality, Frozen-PINNs achieve superior training efficiency and accuracy over state-of-the-art PINNs, often by several orders of magnitude. Our work addresses longstanding training and accuracy bottlenecks of PINNs, delivering quickly trainable, highly accurate, and inherently causal PDE solvers, a combination that prior methods could not realize. Our approach challenges the reliance of PINNs on stochastic gradient-descent-based methods and specialized hardware, leading to a paradigm shift in PINN training and providing a challenging benchmark for the community.
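As a concrete illustration of the random-features ingredient, the minimal sketch below (our illustration, not the authors' code) solves a 1D Poisson problem with a frozen random hidden layer: the hidden weights are sampled once and never updated, and the output weights come from a single linear least-squares solve, with no gradient descent anywhere. The feature count, weight scales, and collocation grid are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): solve -u''(x) = f(x) on [0, 1]
# with u(0) = u(1) = 0 using frozen random features and one linear
# least-squares solve -- no gradient descent. f is chosen so that the
# exact solution is u(x) = sin(pi x).
import numpy as np

rng = np.random.default_rng(0)
M = 200                          # number of random features (assumption)
x = np.linspace(0.0, 1.0, 101)   # collocation points (assumption)

# Frozen random hidden layer: phi_j(x) = tanh(w_j * x + b_j)
w = rng.normal(0.0, 10.0, M)
b = rng.uniform(-10.0, 10.0, M)
z = np.outer(x, w) + b           # shape (n_points, M)
phi = np.tanh(z)
# Second x-derivative of tanh(w x + b): -2 w^2 tanh(z) (1 - tanh(z)^2)
phi_xx = -2.0 * (w**2) * phi * (1.0 - phi**2)

f = np.pi**2 * np.sin(np.pi * x)  # source term for u = sin(pi x)

# Stack PDE-residual rows (-phi_xx c = f) and boundary rows into A c = y,
# then solve once by least squares: the only "training" step.
A = np.vstack([-phi_xx, phi[[0, -1], :]])
y = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, y, rcond=None)

u = phi @ c
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```

The same recipe carries over to other operators whose action on the features is available in closed form; the temporal side of the method is illustrated in the sketch after the key points below.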
Frozen-PINNs employ space-time separation with random features for fast, accurate PDE solving without gradient descent.
- Space-time separation principle avoiding iterative optimization over complex loss landscapes
- Random features approach eliminating need for gradient descent during training
- Incorporates temporal causality by construction through space-time separation (see the sketch after this list)
- Several-orders-of-magnitude improvements over state-of-the-art PINNs on diverse PDE benchmarks
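To make the causality point concrete, the sketch below (again an illustration under assumed settings, not necessarily the paper's exact scheme) applies space-time separation to the 1D heat equation: the spatial random features stay frozen while the coefficient vector is marched forward in time, one linear solve per step, so no information can flow backward in time. Implicit Euler here is a generic stand-in for the paper's temporal treatment.

```python
# Hedged illustration (not the paper's exact scheme): space-time separation
# for u_t = u_xx on [0, 1] with u(0, t) = u(1, t) = 0. Spatial random
# features stay frozen; only the coefficient vector c(t) evolves, and each
# implicit-Euler step is a linear least-squares solve, so temporal
# causality holds by construction.
import numpy as np

rng = np.random.default_rng(1)
M, n_x, dt, n_steps = 200, 101, 1e-3, 100   # illustrative assumptions
x = np.linspace(0.0, 1.0, n_x)

w = rng.normal(0.0, 10.0, M)
b = rng.uniform(-10.0, 10.0, M)
phi = np.tanh(np.outer(x, w) + b)
phi_xx = -2.0 * (w**2) * phi * (1.0 - phi**2)

# Fit the initial condition u(x, 0) = sin(pi x), with boundary rows.
A0 = np.vstack([phi, phi[[0, -1], :]])
y0 = np.concatenate([np.sin(np.pi * x), [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A0, y0, rcond=None)

# March forward: (phi - dt * phi_xx) c_new = phi c_old, plus boundary rows.
A = np.vstack([phi - dt * phi_xx, phi[[0, -1], :]])
for _ in range(n_steps):
    rhs = np.concatenate([phi @ c, [0.0, 0.0]])
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

t_end = dt * n_steps
exact = np.exp(-np.pi**2 * t_end) * np.sin(np.pi * x)
print(f"max error at t = {t_end}:", np.max(np.abs(phi @ c - exact)))
```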
- Physics-informed neural networks
- Space-time separation
- Random features
- PDE solving
The authors did not state explicit limitations.
The authors did not state explicit future directions.
Author keywords
- physics-informed neural networks
- extreme learning machines
- random features
- partial differential equations
- optimization
- training
- causality
- neural PDE solvers
Related orals
On The Surprising Effectiveness of a Single Global Merging in Decentralized Learning
Shows that decentralized learning with a single global merging achieves convergence rates matching parallel SGD under data heterogeneity.
Non-Convex Federated Optimization under Cost-Aware Client Selection
Develops an efficient federated optimization algorithm with cost-aware client selection, achieving the best known communication and local complexity.
Fast Escape, Slow Convergence: Learning Dynamics of Phase Retrieval under Power-Law Data
Analyzes phase retrieval learning dynamics with anisotropic data, deriving explicit scaling laws and three-phase trajectories.
A Representer Theorem for Hawkes Processes via Penalized Least Squares Minimization
Representer theorem for Hawkes processes shows dual coefficients are analytically fixed to unity via penalized least squares.
Quantitative Bounds for Length Generalization in Transformers
Quantitative bounds show training length required for length generalization depends on periodicity, locality, alphabet size, and model norms.