On The Surprising Effectiveness of a Single Global Merging in Decentralized Learning
Tongtian Zhu, Tianyu Zhang, Mingze Wang, Zhanpeng Zhou, Can Wang
We discover and theoretically explain why and when a single global parameter merging in decentralized learning can recover the performance of federated learning, even in highly heterogeneous and communication-constrained environments.
Abstract
Decentralized learning provides a scalable alternative to parameter-server-based training, yet its performance is often hindered by limited peer-to-peer communication. In this paper, we study how communication should be scheduled over time, including when and how frequently devices synchronize. Counterintuitively, our empirical results show that concentrating the communication budget in the later stages of decentralized training remarkably improves global test performance. More surprisingly, we uncover that fully connected communication at the final step, implemented as a single global merging, can significantly improve the performance of decentralized learning under high data heterogeneity. Our theoretical analysis, which explains these phenomena, is the first to establish that the globally merged model of decentralized SGD can match the convergence rate of parallel SGD. Technically, we reinterpret part of the discrepancy among local models, previously regarded as detrimental noise, as a constructive component essential for matching this rate. This work provides evidence that decentralized learning can generalize under high data heterogeneity and limited communication, and opens broad new avenues for model merging research.
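For context, the parallel SGD baseline that the merged model is shown to match has, in the standard smooth nonconvex setting, the following well-known rate; this is textbook background rather than a formula taken from the paper ($n$ is the number of workers, $T$ the number of steps, $\sigma^2$ the stochastic gradient variance):

```latex
% Standard nonconvex convergence rate of parallel (minibatch) SGD with n workers,
% the baseline that the globally merged model of decentralized SGD is claimed to match.
\frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\,\bigl\| \nabla f(\bar{x}_t) \bigr\|^2
  \;=\; \mathcal{O}\!\left( \frac{\sigma}{\sqrt{nT}} + \frac{1}{T} \right)
```

The $\sigma/\sqrt{nT}$ term reflects the linear speedup in the number of workers; the paper's result is that a single global merge at the end suffices for decentralized SGD to match this rate.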
Shows that decentralized learning with a single global merging achieves convergence rates matching parallel SGD under data heterogeneity.
- Demonstrates that a single global merging at the final step improves decentralized learning performance
- First result proving that the globally merged model matches the parallel SGD convergence rate
- Reinterprets the discrepancy among local models as constructive components rather than detrimental noise
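As a rough, self-contained illustration of the mechanism (a minimal NumPy sketch under toy assumptions, not the paper's experimental code): each worker runs decentralized SGD on its own quadratic objective over a ring topology, and a single uniform parameter average is taken at the final step. The ring mixing matrix, quadratic objectives, and all names here are illustrative choices, not taken from the paper.

```python
import numpy as np

def ring_gossip_matrix(n):
    """Doubly stochastic mixing matrix for a ring topology (n >= 3):
    each worker averages parameters with itself and its two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0
    return W

def dsgd_with_final_merge(centers, steps=2000, lr=0.05, noise=0.5, seed=0):
    """Decentralized SGD on heterogeneous quadratics f_i(x) = 0.5 * ||x - c_i||^2.
    Each round: local stochastic gradient step, then gossip with ring neighbors.
    Returns one unmerged local model and the globally merged (averaged) model."""
    rng = np.random.default_rng(seed)
    n, d = centers.shape
    W = ring_gossip_matrix(n)
    X = np.zeros((n, d))  # one parameter vector per worker (rows)
    for _ in range(steps):
        grads = (X - centers) + noise * rng.standard_normal((n, d))
        X = W @ (X - lr * grads)  # local step followed by sparse gossip averaging
    merged = X.mean(axis=0)       # the single global merging at the final step
    return X[0], merged

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    centers = 5.0 * rng.standard_normal((16, 10))  # disagreeing local optima = heterogeneity
    global_opt = centers.mean(axis=0)              # minimizer of the average objective
    local, merged = dsgd_with_final_merge(centers)
    print("error of a single local model:", np.linalg.norm(local - global_opt))
    print("error of the merged model    :", np.linalg.norm(merged - global_opt))
```

On this toy problem, the merged model lands essentially at the minimizer of the average objective, while each unmerged local model stays biased toward its own data; the paper's theory establishes the analogous rate-matching guarantee in the general setting it analyzes.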
- Decentralized SGD
- Model merging
- Global consensus
- Parameter aggregation
- AdamW
- TinyImageNet
- Empirical findings are restricted to visual tasks, consistent with most existing decentralized learning literature (from the paper)
- Extending the experimental setup to broader tasks beyond the vision domain is a meaningful direction (from the paper)
Author keywords
- Decentralized Learning
- Model Merging
- Training Dynamics
- Distributed Training
Related orals
Non-Convex Federated Optimization under Cost-Aware Client Selection
Develops an efficient federated optimization algorithm with cost-aware client selection, achieving the best known communication and local complexity.
Fast Escape, Slow Convergence: Learning Dynamics of Phase Retrieval under Power-Law Data
Analyzes phase retrieval learning dynamics with anisotropic data, deriving explicit scaling laws and three-phase trajectories.
A Representer Theorem for Hawkes Processes via Penalized Least Squares Minimization
Representer theorem for Hawkes processes shows dual coefficients are analytically fixed to unity via penalized least squares.
Quantitative Bounds for Length Generalization in Transformers
Quantitative bounds show training length required for length generalization depends on periodicity, locality, alphabet size, and model norms.
Difficult Examples Hurt Unsupervised Contrastive Learning: A Theoretical Perspective
Provides a theoretical perspective on why difficult examples hurt unsupervised contrastive learning.