ICLR 2026 Orals

Cross-Domain Lossy Compression via Rate- and Classification-Constrained Optimal Transport

Nam Nguyen, Thinh Nguyen, Bella Bose

Session: Diffusion & Flow Matching · Thu, Apr 23 · 4:27 PM–4:37 PM · Room 201 C · Avg rating: 6.00 (range 2–10)
Author-provided TL;DR

We study cross-domain lossy compression via constrained optimal transport with rate and classification constraints, derive closed-form tradeoffs, extend to perception divergences, and validate with deep restoration and inpainting experiments.

Abstract

We study cross-domain lossy compression, where the encoder observes a degraded source while the decoder reconstructs samples from a distinct target distribution. The problem is formulated as constrained optimal transport with two constraints on compression rate and classification loss. With shared common randomness, the one-shot setting reduces to a deterministic transport plan, and we derive closed-form distortion-rate-classification (DRC) and rate-distortion-classification (RDC) tradeoffs for Bernoulli sources under Hamming distortion. In the asymptotic regime, we establish analytic DRC/RDC expressions for Gaussian models under mean-squared error. The framework is further extended to incorporate perception divergences (Kullback-Leibler and squared Wasserstein), yielding closed-form distortion-rate-perception-classification (DRPC) functions. To validate the theory, we develop deep end-to-end compression models for super-resolution (MNIST), denoising (SVHN, CIFAR-10, ImageNet, KODAK), and inpainting (SVHN) problems, demonstrating the consistency between the theoretical results and empirical performance.
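For context on the two source models the abstract analyzes, a minimal sketch of the classical, constraint-free rate-distortion functions for a Bernoulli source under Hamming distortion and a Gaussian source under mean-squared error. These are textbook baselines, not the paper's closed-form DRC/RDC expressions (which additionally carry the classification constraint):

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy h(p) in bits, with h(0) = h(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bernoulli_rate_distortion(p: float, D: float) -> float:
    """Classical R(D) = h(p) - h(D) for a Bernoulli(p) source under
    Hamming distortion, valid for 0 <= D <= min(p, 1-p); zero beyond."""
    p = min(p, 1 - p)
    if D >= p:
        return 0.0
    return binary_entropy(p) - binary_entropy(D)

def gaussian_rate_distortion(var: float, D: float) -> float:
    """Classical R(D) = (1/2) log2(var / D) for a zero-mean Gaussian
    source with variance var under MSE, valid for 0 < D <= var."""
    if D >= var:
        return 0.0
    return 0.5 * math.log2(var / D)
```

The paper's DRC/RDC tradeoffs generalize curves like these by adding a classification-loss constraint to the transport plan; when that constraint is inactive, one would expect the rates to recover these classical expressions.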

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Cross-domain lossy compression unifies rate and classification constraints via an optimal transport framework.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Formulates cross-domain compression as constrained optimal transport with rate and classification constraints
  • Derives closed-form DRC and RDC tradeoffs for Bernoulli sources under Hamming distortion
  • Extends framework to perception divergences yielding closed-form DRPC functions
  • Develops deep end-to-end compression models for super-resolution, denoising, and inpainting
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Optimal transport
  • Rate-distortion theory
  • Adversarial distribution alignment
  • Deep compression networks
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • MNIST
  • SVHN
  • CIFAR-10
  • ImageNet
  • KODAK
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • Lossy Compression
  • Image Compression
  • Image Restoration
  • Image Inpainting
  • Optimal Transport
  • Multi-task Learning
  • Rate-Distortion-Perception Tradeoff
  • Rate-Distortion-Classification Tradeoff
  • Deep Learning
  • Unsupervised Learning

Related orals

Generative Human Geometry Distribution

Introduces a distribution-over-distribution model combining geometry distributions with two-stage flow matching for human 3D generation.

Avg rating: 5.50 (2–8) · Xiangjun Tang et al.