ICLR 2026 Orals

The Polar Express: Optimal Matrix Sign Methods and their Application to the Muon Algorithm

Noah Amsel, David Persson, Christopher Musco, Robert M. Gower

LLMs & Reasoning · Fri, Apr 24 · 10:30–10:40 AM · Room 204 A/B · Avg rating: 8.00 (range 6–10)
Author-provided TL;DR

We introduce a GPU-friendly algorithm for computing the polar decomposition of a matrix to low accuracy that is optimal in its class. This improves Muon.

Abstract

Computing the polar decomposition and the related matrix sign function has been a well-studied problem in numerical analysis for decades. Recently, it has emerged as an important subroutine within the Muon algorithm for training deep neural networks. However, the requirements of this application differ sharply from classical settings: deep learning demands GPU-friendly algorithms that prioritize high throughput over high precision. We introduce *Polar Express*, a new method for computing the polar decomposition. Like Newton–Schulz and other classical polynomial methods, our approach uses only matrix-matrix multiplications, making it very efficient on GPUs. Inspired by earlier work of Chen \& Chow and Nakatsukasa \& Freund, *Polar Express* adapts the update rule at each iteration by solving a minimax optimization problem. We prove that this strategy minimizes error in a worst-case sense, allowing *Polar Express* to converge as rapidly as possible both in the early iterations and asymptotically. We also address finite-precision issues, making it practical to use in `bfloat16`. When integrated into Muon, our method yields consistent improvements in validation loss for a GPT-2 model on one to ten billion tokens from the FineWeb dataset, outperforming recent alternatives across a range of learning rates.
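To make the abstract's "only matrix-matrix multiplications" point concrete, here is a minimal numpy sketch of the classical cubic Newton–Schulz iteration mentioned above, which approximates the orthogonal polar factor. This is the baseline family the paper improves upon, not Polar Express itself; the iteration count and normalization choice are illustrative.

```python
import numpy as np

def newton_schulz_polar(A, iters=50):
    """Approximate the orthogonal polar factor of A with the classical
    cubic Newton-Schulz iteration X <- 1.5*X - 0.5*X X^T X.
    Only matrix-matrix products are used, which is what makes this
    family of methods efficient on GPUs."""
    # Scale so all singular values lie in (0, 1], inside the
    # iteration's convergence region (0, sqrt(3)).
    X = A / np.linalg.norm(A)
    for _ in range(iters):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
Q = newton_schulz_polar(A)
```

Each step applies the odd polynomial p(x) = 1.5x - 0.5x^3 to the singular values, pushing them toward 1; Polar Express instead re-derives the polynomial coefficients at every iteration.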

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Polar Express computes the polar decomposition with minimax-optimized update rules, yielding an efficient GPU-friendly subroutine for deep learning training.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Minimax optimization strategy adapting update rules at each iteration for worst-case error minimization
  • Proof that the method converges as rapidly as possible in a worst-case sense, both in the early iterations and asymptotically
  • Practical implementation in bfloat16 yielding consistent improvements for GPT-2 training with Muon optimizer
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Matrix sign function computation
  • Polynomial iteration methods
  • Minimax optimization
  • Muon optimization algorithm
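The "minimax optimization" entry refers to choosing, at each iteration, the odd polynomial that minimizes the worst-case distance of the singular values from 1 over the interval where they currently lie. The paper derives these coefficients analytically; the brute-force grid search and the degree-3 restriction below are purely illustrative assumptions, included only to show the shape of the per-iteration problem.

```python
import numpy as np

def best_odd_cubic(lo, n_grid=60):
    """Grid-search an odd cubic p(x) = a*x + b*x**3 minimizing the
    worst-case error max_{x in [lo, 1]} |1 - p(x)|, i.e. the minimax
    problem over singular values currently lying in [lo, 1]."""
    xs = np.linspace(lo, 1.0, 400)
    best_err, best_a, best_b = np.inf, None, None
    for a in np.linspace(1.0, 5.0, n_grid):
        for b in np.linspace(-5.0, 0.0, n_grid):
            err = np.max(np.abs(1.0 - (a * xs + b * xs**3)))
            if err < best_err:
                best_err, best_a, best_b = err, a, b
    return best_err, best_a, best_b

# When small singular values remain (lo << 1), the minimax-optimal
# polynomial has a steeper slope at 0 than the fixed Newton-Schulz
# cubic 1.5*x - 0.5*x**3, so it shrinks worst-case error faster.
err, a, b = best_odd_cubic(0.05)
```

This illustrates why adapting the update rule per iteration pays off early: a fixed iteration is tuned for no particular interval, while the minimax choice exploits where the singular values actually are.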
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • FineWeb
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • polar decomposition
  • matrix sign
  • numerical linear algebra
  • muon
  • optimization
  • approximation theory
