ICLR 2026 Orals

EditVerse: Unifying Image and Video Editing and Generation with In-Context Learning

Xuan Ju, Tianyu Wang, Yuqian Zhou, He Zhang, Qing Liu, Nanxuan Zhao, Zhifei Zhang, Yijun Li, Yuanhao Cai, Shaoteng Liu, Daniil Pakhomov, Zhe Lin, Soo Ye Kim, Qiang Xu

Multimodal & Speech · Sat, Apr 25 · 11:18–11:28 AM · Room 201 A/B · Avg rating: 5.60 (range 4–8)

Abstract

Recent advances in foundation models highlight a clear trend toward unification and scaling, showing emergent capabilities across diverse domains. While image generation and editing have rapidly transitioned from task-specific to unified frameworks, video generation and editing remain fragmented due to architectural limitations and data scarcity. In this work, we introduce EditVerse, a unified framework for image and video generation and editing within a single model. By representing all modalities, i.e., text, image, and video, as a unified token sequence, EditVerse leverages self-attention to achieve robust in-context learning, natural cross-modal knowledge transfer, and flexible handling of inputs and outputs with arbitrary resolutions and durations. To address the lack of video editing training data, we design a scalable data pipeline that curates 232K video editing samples and combines them with large-scale image and video datasets for joint training. Furthermore, we present EditVerseBench, the first benchmark for instruction-based video editing covering diverse tasks and resolutions. Extensive experiments and user studies demonstrate that EditVerse achieves state-of-the-art performance, surpassing existing open-source and commercial models, while exhibiting emergent editing and generation abilities across modalities.
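The core idea in the abstract — representing text, image, and video as one token sequence so that self-attention handles inputs of arbitrary resolution and duration — can be illustrated with a minimal sketch. This is not the authors' implementation; all shapes, names, and the single-head attention layer below are illustrative assumptions:

```python
# Minimal sketch of a "unified token sequence": every modality is embedded
# into the same d-dimensional token space, concatenated, and processed by
# one self-attention layer so tokens from any modality can attend to any other.
import numpy as np

rng = np.random.default_rng(0)
d = 32  # shared embedding dimension (assumed)

# Hypothetical per-modality tokenization into (num_tokens, d) arrays.
text_tokens  = rng.normal(size=(8, d))          # e.g. 8 instruction tokens
image_tokens = rng.normal(size=(4 * 4, d))      # e.g. a 4x4 grid of patches
video_tokens = rng.normal(size=(2 * 4 * 4, d))  # e.g. 2 frames of 4x4 patches

# Unified sequence: changing resolution or duration only changes token counts,
# not the model interface.
seq = np.concatenate([text_tokens, image_tokens, video_tokens], axis=0)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over the full sequence."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(seq, wq, wk, wv)
print(out.shape)  # (56, 32): one output token per input token, any modality
```

In this framing, cross-modal knowledge transfer falls out of the architecture: because all modalities share one attention map, a text instruction token can directly attend to any video patch token.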

One-sentence summary·Auto-generated by claude-haiku-4-5-20251001

EditVerse unifies image and video generation and editing in a single model by representing text, images, and video as one token sequence, achieving state-of-the-art instruction-based editing with emergent cross-modal abilities.

Contributions·Auto-generated by claude-haiku-4-5-20251001
  • Unified framework representing text, image, and video as a single token sequence, with self-attention enabling in-context learning and cross-modal knowledge transfer
  • Scalable data pipeline curating 232K video editing samples, combined with large-scale image and video datasets for joint training
  • EditVerseBench, the first benchmark for instruction-based video editing covering diverse tasks and resolutions
  • State-of-the-art performance surpassing existing open-source and commercial models, confirmed by experiments and user studies
Methods used·Auto-generated by claude-haiku-4-5-20251001
  • Unified token-sequence representation of text, image, and video
  • Self-attention for in-context learning
  • Joint training on image and video generation/editing data
  • Flexible handling of inputs and outputs with arbitrary resolutions and durations

Author keywords

  • Video Editing
  • Content Generation
  • Artificial Intelligence
