ICLR 2026 Orals

Latent Particle World Models: Self-supervised Object-centric Stochastic Dynamics Modeling

Tal Daniel, Carl Qi, Dan Haramati, Amir Zadeh, Chuan Li, Aviv Tamar, Deepak Pathak, David Held

Reinforcement Learning & Agents · Fri, Apr 24 · 3:27 PM–3:37 PM · 201 A/B · Avg rating: 7.33 (6–8)
Author-provided TL;DR

A self-supervised object-centric world model that learns keypoints and masks directly from videos, supports multi-modal conditioning, and scales to real-world multi-object datasets.

Abstract

We introduce Latent Particle World Model (LPWM), a self-supervised object-centric world model scaled to real-world multi-object datasets and applicable to decision-making. LPWM autonomously discovers keypoints, bounding boxes, and object masks directly from video data, enabling it to learn rich scene decompositions without supervision. Our architecture is trained end-to-end purely from videos and supports flexible conditioning on actions, language, and image goals. LPWM models stochastic particle dynamics via a novel latent action module and achieves state-of-the-art results on diverse real-world and synthetic datasets. Beyond stochastic video modeling, LPWM is readily applicable to decision-making, including goal-conditioned imitation learning, as we demonstrate in the paper. Code, data, pre-trained models, and video rollouts are available at: https://taldatech.github.io/lpwm-web
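
To make the pipeline described in the abstract concrete, here is a minimal sketch of how an object-centric world model of this kind could be wired together: frames are encoded into a set of latent particles (keypoint coordinates, scale, and appearance features), a latent action module embeds the conditioning signal, and a stochastic transition predicts the next particle set. All class names, dimensions, and the Gaussian transition below are illustrative assumptions, not the authors' implementation.

# Minimal sketch of an object-centric world model with latent particles and a
# latent action module. Hypothetical names and sizes; not the LPWM codebase.
import torch
import torch.nn as nn

class ParticleEncoder(nn.Module):
    """Map an RGB frame to K particles: (x, y) keypoint, log-scale, and appearance features."""
    def __init__(self, num_particles=8, feat_dim=16):
        super().__init__()
        self.num_particles, self.feat_dim = num_particles, feat_dim
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Per particle: 2 keypoint coords + 1 log-scale + feat_dim appearance features.
        self.head = nn.Linear(64, num_particles * (3 + feat_dim))

    def forward(self, frame):
        out = self.head(self.backbone(frame))
        return out.view(-1, self.num_particles, 3 + self.feat_dim)

class LatentActionModule(nn.Module):
    """Summarize a conditioning signal (action / language / goal embedding) into one latent action vector."""
    def __init__(self, cond_dim=8, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(cond_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))

    def forward(self, cond):
        return self.net(cond)

class StochasticParticleDynamics(nn.Module):
    """Predict a Gaussian over next-step particle states, conditioned on the latent action."""
    def __init__(self, particle_dim, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(particle_dim + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 2 * particle_dim),  # mean and log-variance
        )

    def forward(self, particles, latent_action):
        z = latent_action.unsqueeze(1).expand(-1, particles.shape[1], -1)
        mean, logvar = self.net(torch.cat([particles, z], dim=-1)).chunk(2, dim=-1)
        return mean + torch.randn_like(mean) * (0.5 * logvar).exp()  # reparameterized sample

# Tiny usage example on random data.
enc = ParticleEncoder()
lam = LatentActionModule()
dyn = StochasticParticleDynamics(particle_dim=3 + enc.feat_dim)
frame, cond = torch.randn(2, 3, 64, 64), torch.randn(2, 8)
next_particles = dyn(enc(frame), lam(cond))  # shape: (2, 8, 19)
print(next_particles.shape)

In a real model the predicted particles would be decoded back to pixels for the reconstruction loss; the sketch only covers the encode-condition-transition loop.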

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

LPWM enables self-supervised object-centric world modeling with a latent action module for stochastic video generation and control.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Autonomous discovery of keypoints, bounding boxes and object masks directly from video without supervision
  • Latent action module enabling flexible conditioning on actions, language and image goals for controllable generation
  • State-of-the-art results on real-world and synthetic datasets with applicability to goal-conditioned imitation learning
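
As a rough illustration of the last contribution, the following sketch shows one way a particle representation could feed goal-conditioned imitation learning: a small policy consumes the current and goal-frame particle sets and is trained to regress expert actions with behavior cloning. The policy architecture, dimensions, and loss are assumptions for illustration only, not the paper's method.

# Hypothetical goal-conditioned behavior cloning on top of particle representations.
import torch
import torch.nn as nn

class GoalConditionedParticlePolicy(nn.Module):
    def __init__(self, num_particles=8, particle_dim=19, action_dim=4):
        super().__init__()
        in_dim = 2 * num_particles * particle_dim  # current + goal particle sets, flattened
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, action_dim))

    def forward(self, particles, goal_particles):
        x = torch.cat([particles.flatten(1), goal_particles.flatten(1)], dim=-1)
        return self.net(x)

# One behavior-cloning step on a random batch (stand-ins for encoder outputs and expert actions).
policy = GoalConditionedParticlePolicy()
particles, goal_particles = torch.randn(16, 8, 19), torch.randn(16, 8, 19)
expert_actions = torch.randn(16, 4)
loss = nn.functional.mse_loss(policy(particles, goal_particles), expert_actions)
loss.backward()
print(loss.item())
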
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Self-supervised learning
  • Object-centric representation
  • Latent action modeling
  • End-to-end video modeling
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Currently depends on datasets with small camera motion and recurring scenarios such as robotics or video games
  • Not yet applicable to general-purpose large-scale video data
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Scale to diverse datasets beyond robotics and video games
  • Enable unified multi-modal conditioning with simultaneous action, language and image signals
  • Integrate explicit reward modeling for reinforcement learning

Author keywords

  • World Model
  • Self-supervised
  • unsupervised
  • object-centric
  • video prediction
  • video generation
  • imitation learning
  • latent particles
  • vae
