ICLR 2026 Orals

On the Wasserstein Geodesic Principal Component Analysis of probability measures

Nina Vesseron, Elsa Cazelles, Alice Le Brigant, Klein

Datasets, Benchmarks & Evaluation Thu, Apr 23 · 11:06 AM–11:16 AM · 204 A/B Avg rating: 7.00 (4–10)

Abstract

This paper focuses on Geodesic Principal Component Analysis (GPCA) on a collection of probability distributions using the Otto-Wasserstein geometry. The goal is to identify geodesic curves in the space of probability measures that best capture the modes of variation of the underlying dataset. We first address the case of a collection of Gaussian distributions, and show how to lift the computations to the space of invertible linear maps. For the more general setting of absolutely continuous probability measures, we leverage a novel approach to parameterizing geodesics in Wasserstein space with neural networks. Finally, we compare our approach to classical tangent PCA (TPCA) through various examples and provide illustrations on real-world datasets.
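For the Gaussian case mentioned in the abstract, Wasserstein geodesics have a closed form: the optimal transport map between two Gaussians is affine, and the McCann interpolation along it stays Gaussian. The following is a minimal sketch of that geodesic (illustrative code, not the paper's implementation; function names are our own):

```python
import numpy as np

def sqrtm_sym(M):
    """Symmetric positive semi-definite matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def gaussian_w2_geodesic(m0, S0, m1, S1, t):
    """Point at time t on the Wasserstein-2 geodesic from N(m0, S0) to N(m1, S1).

    The optimal transport map is T(x) = m1 + A (x - m0) with
    A = S0^{-1/2} (S0^{1/2} S1 S0^{1/2})^{1/2} S0^{-1/2}, and the
    McCann interpolation ((1-t) Id + t T)# N(m0, S0) remains Gaussian.
    """
    S0h = sqrtm_sym(S0)
    S0h_inv = np.linalg.inv(S0h)
    A = S0h_inv @ sqrtm_sym(S0h @ S1 @ S0h) @ S0h_inv
    mt = (1 - t) * m0 + t * m1          # interpolated mean
    B = (1 - t) * np.eye(len(m0)) + t * A
    return mt, B @ S0 @ B.T             # interpolated covariance
```

At t = 0 and t = 1 the endpoints are recovered exactly, which makes this a convenient sanity check for any learned parameterization of geodesics.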

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Geodesic PCA for probability distributions using Wasserstein geometry with neural network parametrization for continuous distributions.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Method for exact GPCA on Gaussian distributions via lift computations in space of invertible linear maps
  • Novel neural network approach to parameterize geodesics in Wasserstein space for absolutely continuous distributions
  • Sampling capability from any point along geodesic components without empirical approximations
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Geodesic Principal Component Analysis
  • Wasserstein geometry
  • neural network parametrization
  • optimal transport
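The tangent PCA baseline that the paper compares against is simplest in one dimension, where the Wasserstein-2 space embeds into L2 via quantile functions and the barycenter's quantile function is the mean of the quantile functions. A hedged sketch on a toy dataset of 1D Gaussians (the grid, dataset, and variable names are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.stats import norm

# Quantile levels: the 1D Wasserstein space embeds into L2 via quantile functions.
u = np.linspace(0.01, 0.99, 99)

# Toy dataset: 1D Gaussians with varying means and standard deviations.
means = np.array([0.0, 1.0, -1.0, 0.5])
stds = np.array([1.0, 2.0, 0.5, 1.5])
Q = np.stack([norm.ppf(u, loc=m, scale=s) for m, s in zip(means, stds)])

# Tangent PCA at the barycenter: in 1D the barycenter's quantile function is
# the mean of the quantile functions, and the log maps are the differences.
Qbar = Q.mean(axis=0)
V = Q - Qbar
U, S, Wt = np.linalg.svd(V, full_matrices=False)
pc1 = Wt[0]  # first principal mode of variation in the tangent space
```

GPCA differs from this linearization in that its principal components are required to be actual geodesics in Wasserstein space rather than straight lines in a tangent space, which matters most far from the barycenter.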
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • GPCA and TPCA yield similar results for most Gaussian distributions, except those with covariance matrices near the boundary of the SPD cone
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Develop more fundamental theories to explain Intrinsic Entropy measurements
  • Explore convex function parametrization without imposing hard architectural constraints

Author keywords

  • Wasserstein PCA
  • optimal transport
  • deep learning
