ICLR 2026 Orals

PateGAIL++: Utility Optimized Private Trajectory Generation with Imitation Learning

Yingjie Ma, Bijal Bharadva, Xin Zhang, Joann Qiongna Chen

Safety, Privacy & Alignment Sat, Apr 25 · 3:39 PM–3:49 PM · 201 A/B Avg rating: 5.00 (4–6)

Abstract

Human mobility trajectory data supports a wide range of applications, including urban planning, intelligent transportation systems, and public safety monitoring. However, large-scale, high-quality mobility datasets are difficult to obtain due to privacy concerns: raw trajectories may reveal sensitive user information, such as home addresses, daily routines, or social relationships, making privacy-preserving alternatives essential. Recent advances in deep generative modeling have enabled synthetic trajectory generation, but existing methods either lack formal privacy guarantees or suffer from reduced utility and scalability. Differential privacy (DP) has emerged as a rigorous framework for data protection, and recent efforts such as PATE-GAN and PateGail integrate DP with generative adversarial learning. While promising, these methods struggle to generalize across diverse trajectory patterns and often incur significant utility degradation. In this work, we propose PateGAIL++, a new framework that builds on PateGail by introducing a sensitivity-aware noise injection module that dynamically adjusts privacy noise based on sample-level sensitivity. This design significantly improves trajectory fidelity, downstream task performance, and scalability under strong privacy guarantees. We further adapt our framework to the local differential privacy (LDP) setting, enabling individual-level protection without reliance on a trusted server. We evaluate our method on a real-world mobility dataset and demonstrate its superiority over state-of-the-art baselines in terms of the privacy-utility trade-off.
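The core idea of sensitivity-aware noise injection can be sketched roughly as follows. This is an illustrative mock-up, not the paper's implementation: the function name, the batch-mean normalization of sensitivity scores, and the use of a clipped Gaussian mechanism are all my own assumptions about how per-sample sensitivity might modulate DP noise.

```python
import numpy as np

def sensitivity_aware_noise(gradients, sensitivities, base_sigma=1.0,
                            clip_norm=1.0, rng=None):
    """Illustrative sketch: scale Gaussian DP noise per sample by a
    normalized sensitivity score, so low-sensitivity samples receive
    less noise while the batch-average noise scale is preserved."""
    rng = rng or np.random.default_rng()
    grads = np.asarray(gradients, dtype=float)
    sens = np.asarray(sensitivities, dtype=float)
    # Clip each per-sample gradient to bound its L2 sensitivity.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Normalize sensitivity scores so they average to 1 across the batch.
    weights = sens / sens.mean()
    # Per-sample Gaussian noise whose scale grows with sensitivity.
    scale = base_sigma * clip_norm * weights[:, None]
    noise = rng.normal(0.0, 1.0, size=clipped.shape) * scale
    return clipped + noise
```

With `base_sigma=0` the function reduces to plain per-sample clipping, which makes the sensitivity-weighting path easy to sanity-check in isolation.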

One-sentence summary·Auto-generated by claude-haiku-4-5-20251001(?)

PateGAIL++ is a privacy-preserving trajectory generation framework that uses sensitivity-aware noise allocation to improve the privacy-utility trade-off.

Contributions
  • Sensitivity-aware noise injection module dynamically adjusting privacy noise based on sample-level sensitivity
  • Extension to local differential privacy setting enabling individual-level protection without trusted server
  • Improved trajectory fidelity, downstream task performance, and scalability under strong privacy guarantees
Methods used
  • differential privacy
  • generative adversarial learning
  • WGAN-GP
  • sensitivity analysis
Datasets used
  • real-world mobility dataset
Limitations (author-stated)
  • The sensitivity measure relies on discriminator confidence at the state-action level and does not capture semantic privacy risks
  • Repeated visits to sensitive locations and long-horizon sequence patterns are not fully addressed
Future work (author-stated)
  • Develop more nuanced sensitivity models incorporating location-specific and long-term risk signals

Author keywords

  • Differential Privacy
  • Imitation Learning
