Navigating the Latent Space Dynamics of Neural Models
Marco Fumero, Luca Moschella, Emanuele Rodolà, Francesco Locatello
Abstract
Neural networks transform high-dimensional data into compact, structured representations, often modeled as elements of a lower-dimensional latent space. In this paper, we present an alternative interpretation of neural models as dynamical systems acting on the latent manifold. Specifically, we show that autoencoder models implicitly define a _latent vector field_ on the manifold, derived by iteratively applying the encoding-decoding map, without any additional training. We observe that standard training procedures introduce inductive biases that lead to the emergence of attractor points within this vector field. Drawing on this insight, we propose to leverage the vector field as a _representation_ of the network, providing a novel tool to analyze the properties of the model and the data. This representation enables us to: $(i)$ analyze the generalization and memorization regimes of neural models, even throughout training; $(ii)$ extract prior knowledge encoded in the network's parameters from the attractors, without requiring any input data; $(iii)$ identify out-of-distribution samples from their trajectories in the vector field. We further validate our approach on vision foundation models, showcasing the applicability and effectiveness of our method in real-world scenarios.
Interprets neural autoencoders as dynamical systems with latent vector fields to analyze generalization, memorization, and out-of-distribution detection.
- Represents autoencoders as vector fields defined by iterating the encoding-decoding map in latent space
- Shows attractors in latent vector fields link to memorization and generalization regimes
- Enables extracting prior knowledge from network parameters without input data
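The construction summarized above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it uses a hand-built linear toy autoencoder (all weights and names here are hypothetical) to show how iterating the encoding-decoding map induces a latent vector field $V(z) = \mathrm{enc}(\mathrm{dec}(z)) - z$, whose fixed points act as attractors of the latent dynamics.

```python
import numpy as np

# Hypothetical toy linear autoencoder; weights are chosen by hand so the
# composed latent map contracts toward a single fixed point. The paper's
# setting instead uses trained nonlinear networks.
W_d = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])        # decoder: R^2 -> R^3
W_e = 0.5 * np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # encoder: R^3 -> R^2
b_e = np.array([1.0, 1.0])                                  # encoder bias

def decode(z):
    return W_d @ z

def encode(x):
    return W_e @ x + b_e

def latent_vector_field(z):
    # V(z) = enc(dec(z)) - z: the displacement one encode-decode pass
    # applies to a latent point. Zeros of V are attractors.
    return encode(decode(z)) - z

def follow_trajectory(z0, steps=50):
    # Iterating z <- z + V(z) = enc(dec(z)) traces a trajectory of the
    # latent dynamics; for this contractive toy map it converges to the
    # unique fixed point z* = (2, 2).
    traj = [np.asarray(z0, dtype=float)]
    for _ in range(steps):
        traj.append(traj[-1] + latent_vector_field(traj[-1]))
    return np.stack(traj)

traj = follow_trajectory([10.0, -4.0])
```

In this toy example the latent map is $z \mapsto 0.5\,z + (1, 1)$, so every trajectory flows to the attractor $(2, 2)$; properties of such trajectories (e.g. their length or convergence speed) are the kind of signal the paper uses for analyzing memorization and detecting out-of-distribution samples.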
- Autoencoder analysis
- Dynamical systems
- Latent space geometry
- Foundation models
Limitations
- The formulation cannot be directly generalized to discriminative models or to self-supervised models with non-invertible networks
- The framework is primarily suited to autoencoders; extending it to other architectures requires modifications
Future directions
- Characterize how attractors form during training and the conditions under which noisy initializations converge to attractors
- Study the alignment of latent vector fields across networks trained on the same data
- Apply the framework to discriminative and self-supervised models via surrogate autoencoders
Author keywords
- Representation learning
- latent vector field
- autoencoders
- memorization and generalization
- attractor
Related orals
Improving Diffusion Models for Class-imbalanced Training Data via Capacity Manipulation
Capacity manipulation improves diffusion models' handling of class-imbalanced data by reserving capacity for minority classes via low-rank decomposition.
Depth Anything 3: Recovering the Visual Space from Any Views
DA3 predicts spatially consistent 3D geometry from arbitrary camera views using a plain transformer and depth-ray targets.
Text-to-3D by Stitching a Multi-view Reconstruction Network to a Video Generator
VIST3A stitches text-to-video models with 3D reconstruction systems and aligns them via reward finetuning for high-quality text-to-3D generation.
Radiometrically Consistent Gaussian Surfels for Inverse Rendering
RadioGS introduces radiometric consistency supervision for inverse rendering to accurately model indirect illumination in Gaussian-based representations.
True Self-Supervised Novel View Synthesis is Transferable
Presents XFactor, first geometry-free self-supervised model for transferable novel view synthesis without 3D inductive biases.