ICLR 2026 Orals

Navigating the Latent Space Dynamics of Neural Models

Marco Fumero, Luca Moschella, Emanuele Rodolà, Francesco Locatello

Vision & 3D · Sat, Apr 25 · 11:06 AM–11:16 AM · 203 A/B · Avg rating: 6.50 (6–8)

Abstract

Neural networks transform high-dimensional data into compact, structured representations, often modeled as elements of a lower-dimensional latent space. In this paper, we present an alternative interpretation of neural models as dynamical systems acting on the latent manifold. Specifically, we show that autoencoder models implicitly define a _latent vector field_ on the manifold, derived by iteratively applying the encoding-decoding map, without any additional training. We observe that standard training procedures introduce inductive biases that lead to the emergence of attractor points within this vector field. Drawing on this insight, we propose to leverage the vector field as a _representation_ for the network, providing a novel tool to analyze the properties of the model and the data. This representation makes it possible to: $(i)$ analyze the generalization and memorization regimes of neural models, even throughout training; $(ii)$ extract prior knowledge encoded in the network's parameters from the attractors, without requiring any input data; $(iii)$ identify out-of-distribution samples from their trajectories in the vector field. We further validate our approach on vision foundation models, showcasing the applicability and effectiveness of our method in real-world scenarios.
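
A minimal sketch of the central construction, assuming a trained autoencoder exposed as two callables `enc` and `dec` (hypothetical names; the paper's exact interface is not reproduced here). The latent vector field is taken to be the displacement produced by one encode-decode round trip, and attractors are reached by iterating that map to a fixed point:

```python
import torch

def latent_vector_field(z, enc, dec):
    """Latent vector field as the displacement of one encode-decode
    round trip: V(z) = enc(dec(z)) - z. Its fixed points are the
    candidate attractors of the latent dynamics."""
    with torch.no_grad():
        return enc(dec(z)) - z

def attractor_trajectory(z0, enc, dec, n_steps=100, tol=1e-5):
    """Iterate z_{t+1} = enc(dec(z_t)) from z0 and return the full
    trajectory, stopping early once the step norm falls below tol."""
    z, trajectory = z0, [z0]
    with torch.no_grad():
        for _ in range(n_steps):
            z_next = enc(dec(z))
            trajectory.append(z_next)
            if torch.norm(z_next - z) < tol:
                break
            z = z_next
    return torch.stack(trajectory)
```

For item $(iii)$, one plausible use of the trajectory is to score out-of-distribution inputs by how far or how slowly their latent codes travel before settling, on the premise that in-distribution codes start near an attractor; the specific score used in the paper is not reproduced here.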

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Interprets neural autoencoders as dynamical systems with latent vector fields, used to analyze generalization and memorization and to detect out-of-distribution samples.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Represents autoencoders as vector fields defined by iterating the encoding-decoding map in latent space
  • Shows that attractors in latent vector fields are linked to memorization and generalization regimes
  • Enables extracting prior knowledge from network parameters without input data (see the sketch below)
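
A hedged sketch of the data-free extraction in the last bullet, under the same assumed `enc`/`dec` interface as above: latent codes are sampled from pure noise, driven to their attractors, and decoded. Whether noise initializations converge to meaningful attractors is itself a condition the authors flag under future work.

```python
import torch

def extract_attractors(enc, dec, latent_dim, n_samples=16, n_steps=200):
    """Probe what the network's parameters encode without any input
    data: start from Gaussian latent codes, run each to its attractor
    under z -> enc(dec(z)), then decode the resulting fixed points."""
    z = torch.randn(n_samples, latent_dim)
    with torch.no_grad():
        for _ in range(n_steps):
            z = enc(dec(z))
        return dec(z)  # decoded attractors, e.g. prototype-like samples
```
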
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Autoencoder analysis
  • Dynamical systems
  • Latent space geometry
  • Foundation models
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • The defining equation cannot be directly generalized to discriminative models or self-supervised models with non-invertible networks
  • Framework is primarily suited to autoencoders; extending it to other architectures requires modifications
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Characterize how attractors form during training and the conditions under which noise attractors converge
  • Study the alignment of latent vector fields across networks trained on the same data
  • Apply the framework to discriminative and self-supervised models via surrogate autoencoders

Author keywords

  • Representation learning
  • latent vector field
  • autoencoders
  • memorization and generalization
  • attractor
