ICLR 2026 Orals

Fast training of accurate physics-informed neural networks without gradient descent

Chinmay Datar, Taniya Kapoor, Abhishek Chandra, Qing Sun, Erik Lien Bolager, Iryna Burak, Anna Veselovska, Massimo Fornasier, Felix Dietrich

Theory & Optimization · Sat, Apr 25 · 4:15 PM–4:25 PM · Room 201 C · Avg rating: 7.00 (range 4–8)
Author-provided TL;DR

Our approach, Frozen-PINN, addresses longstanding training and accuracy bottlenecks of Physics-Informed Neural Networks (PINNs), making PINNs highly precise, temporally causal, and extremely fast to train.

Abstract

Solving time-dependent Partial Differential Equations (PDEs) is one of the most critical problems in computational science. While Physics-Informed Neural Networks (PINNs) offer a promising framework for approximating PDE solutions, their accuracy and training speed are limited by two core barriers: gradient-descent-based iterative optimization over complex loss landscapes and non-causal treatment of time as an extra spatial dimension. We present Frozen-PINN, a novel PINN based on the principle of space-time separation that leverages random features instead of training with gradient descent, and incorporates temporal causality by construction. On nine PDE benchmarks, including challenges like extreme advection speeds, shocks, and high-dimensionality, Frozen-PINNs achieve superior training efficiency and accuracy over state-of-the-art PINNs, often by several orders of magnitude. Our work addresses longstanding training and accuracy bottlenecks of PINNs, delivering quickly trainable, highly accurate, and inherently causal PDE solvers, a combination that prior methods could not realize. Our approach challenges the reliance of PINNs on stochastic gradient-descent-based methods and specialized hardware, leading to a paradigm shift in PINN training and providing a challenging benchmark for the community.
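To make the "random features instead of gradient descent" idea concrete, here is a minimal, generic sketch in the extreme-learning-machine style: hidden weights are drawn randomly and frozen, and the only "training" is a single linear least-squares solve over the output coefficients. This is an illustrative toy for a 1D Poisson problem, not the authors' Frozen-PINN (which additionally uses space-time separation for time-dependent PDEs); all names and parameter choices below are assumptions.

```python
import numpy as np

# Toy problem: -u''(x) = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x).
rng = np.random.default_rng(0)
M = 200                       # number of random features (assumed)
w = rng.uniform(-10, 10, M)   # frozen random weights
b = rng.uniform(-10, 10, M)   # frozen random biases

def phi(x):
    # Feature matrix: phi[i, j] = tanh(w_j * x_i + b_j)
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):
    # Analytic second derivative of tanh(w x + b) w.r.t. x:
    # d^2/dx^2 tanh(z) = -2 tanh(z) (1 - tanh(z)^2) * w^2
    t = np.tanh(np.outer(x, w) + b)
    return -2 * t * (1 - t**2) * w**2

x_int = np.linspace(0.0, 1.0, 100)        # collocation points
f = np.pi**2 * np.sin(np.pi * x_int)      # source term

# Stack PDE-residual rows and boundary-condition rows into one linear
# system A c = y; solving it by least squares replaces gradient descent.
A = np.vstack([-phi_xx(x_int), phi(np.array([0.0, 1.0]))])
y = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, y, rcond=None)

x_test = np.linspace(0.0, 1.0, 50)
u_hat = phi(x_test) @ c
err = np.max(np.abs(u_hat - np.sin(np.pi * x_test)))
```

Because the features are frozen, the loss in the coefficients `c` is quadratic, so the linear solve reaches its optimum in one shot; this is the mechanism that sidesteps the complex loss landscapes the abstract refers to.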

One-sentence summary·Auto-generated by claude-haiku-4-5-20251001

Frozen-PINNs employ space-time separation with random features for fast, accurate PDE solving without gradient descent.

Contributions·Auto-generated by claude-haiku-4-5-20251001
  • Space-time separation principle avoiding iterative optimization over complex loss landscapes
  • Random features approach eliminating need for gradient descent during training
  • Incorporates temporal causality by construction through space-time separation
  • Several orders of magnitude improvement over SoTA PINNs on diverse PDE benchmarks
Methods used·Auto-generated by claude-haiku-4-5-20251001
  • Physics-informed neural networks
  • Space-time separation
  • Random features
  • PDE solving
Limitations (author-stated)·Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated)·Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • physics-informed neural networks
  • extreme learning machines
  • random features
  • partial differential equations
  • optimization
  • training
  • causality
  • neural PDE solvers
