ICLR 2026 Orals

On The Surprising Effectiveness of a Single Global Merging in Decentralized Learning

Tongtian Zhu, Tianyu Zhang, Mingze Wang, Zhanpeng Zhou, Can Wang

Theory & Optimization · Thu, Apr 23 · 10:42 AM–10:52 AM · 204 A/B · Avg rating: 7.50 (6–8)
Author-provided TL;DR

We discover and theoretically explain why and when a single global parameter merging in decentralized learning can recover the performance of federated learning, even in highly heterogeneous and communication-constrained environments.

Abstract

Decentralized learning provides a scalable alternative to parameter-server-based training, yet its performance is often hindered by limited peer-to-peer communication. In this paper, we study how communication should be scheduled over time, including determining when and how frequently devices synchronize. Counterintuitive empirical results show that concentrating communication budgets in the later stages of decentralized training remarkably improves global test performance. Surprisingly, we uncover that fully connected communication at the final step, implemented by a single global merging, can significantly improve the performance of decentralized learning under high data heterogeneity. Our theoretical contributions, which explain these phenomena, are the first to establish that the globally merged model of decentralized SGD can match the convergence rate of parallel SGD. Technically, we reinterpret part of the discrepancy among local models, previously considered detrimental noise, as a constructive component essential for matching this rate. This work provides evidence that decentralized learning can generalize under high data heterogeneity and limited communication, while offering broad new avenues for model merging research.
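To make this training pattern concrete, below is a minimal self-contained sketch (not the authors' code) of decentralized gradient descent on a toy quadratic objective: each worker takes local steps, exchanges parameters only with its two ring neighbors via gossip averaging, and a single global merging, a plain parameter average across all workers, is performed once at the very end. The quadratic objectives, ring topology, step count, and all names below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_workers, dim, steps, lr = 8, 10, 200, 0.05

# Heterogeneous local objectives: worker i minimizes ||x - target_i||^2 / 2,
# a toy stand-in for non-IID local data.
targets = rng.normal(size=(n_workers, dim))
x = rng.normal(size=(n_workers, dim))        # one parameter vector per worker

def local_gradient(i, x_i):
    return x_i - targets[i]

# Doubly stochastic gossip matrix for a ring: each worker averages its
# parameters with its two neighbors only (sparse peer-to-peer communication).
W = np.zeros((n_workers, n_workers))
for i in range(n_workers):
    W[i, i] = W[i, (i - 1) % n_workers] = W[i, (i + 1) % n_workers] = 1.0 / 3.0

for t in range(steps):
    grads = np.stack([local_gradient(i, x[i]) for i in range(n_workers)])
    x = x - lr * grads    # local gradient step on every worker
    x = W @ x             # gossip averaging with ring neighbors

# Single global merging: one all-to-all parameter average at the final step.
x_merged = x.mean(axis=0)

# Because W is doubly stochastic, gossip preserves the worker average, so the
# merged model lands near the minimizer of the averaged objective (mean target).
print("distance to global optimum:", np.linalg.norm(x_merged - targets.mean(axis=0)))

In this toy setting the mixing matrix preserves the average of the workers' parameters, so the final merge recovers the same averaged iterate a parameter server would maintain; the paper's theory addresses when a comparable effect holds for general decentralized SGD.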

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Shows that decentralized learning with a single global merging achieves a convergence rate matching parallel SGD under data heterogeneity.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Demonstrates that a single global merging at the final step improves decentralized learning performance
  • First result proving that the globally merged model of decentralized SGD matches the convergence rate of parallel SGD (an illustrative variance calculation follows this list)
  • Reinterprets part of the discrepancy among local models as a constructive component rather than detrimental noise
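As rough intuition for the convergence-rate contribution (an illustrative calculation, not the paper's proof): suppose the n local models carry independent, zero-mean stochastic-gradient noise terms \xi_t^{(i)} with \mathbb{E}\|\xi_t^{(i)}\|^2 = \sigma^2. Merging averages both the parameters and the noise,

\[
\bar{x}_T = \frac{1}{n}\sum_{i=1}^{n} x_T^{(i)},
\qquad
\mathbb{E}\Bigl\|\frac{1}{n}\sum_{i=1}^{n}\xi_t^{(i)}\Bigr\|^2 = \frac{\sigma^2}{n},
\]

so the merged iterate sees the same 1/n variance reduction that gives parallel SGD its \mathcal{O}(1/\sqrt{nT}) rate; under this reading, part of the worker-to-worker discrepancy is noise that the single merge averages out rather than an error that extra communication must eliminate.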
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Decentralized SGD
  • Model merging
  • Global consensus
  • Parameter aggregation
  • AdamW
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • TinyImageNet
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Empirical findings are restricted to visual tasks, consistent with most existing decentralized learning literature (from the paper)
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Extending the experimental setup to broader tasks beyond the vision domain is a meaningful direction (from the paper)

Author keywords

  • Decentralized Learning
  • Model Merging
  • Training Dynamics
  • Distributed Training
