ICLR 2026 Orals

Mixture-of-Experts Can Surpass Dense LLMs Under Strictly Equal Resource

Houyi Li, Ka Man Lo, Shijie Xuyang, Ziqi Wang, Wenzhen Zheng, Haocheng Zhang, Zhao Li, Shuigeng Zhou, Xiangyu Zhang, Daxin Jiang

LLMs & Reasoning Sat, Apr 25 · 3:51 PM–4:01 PM · 203 A/B Avg rating: 5.00 (4–8)

Abstract

Mixture-of-Experts (MoE) language models dramatically expand model capacity and achieve remarkable performance without increasing per-token compute. However, can MoEs surpass dense architectures under strictly equal resource constraints, that is, when the total parameter count, training compute, and data budget are identical? This question remains under-explored despite its significant practical value. In this paper, we propose a novel perspective and methodological framework to study it thoroughly. First, we comprehensively investigate the MoE architecture and arrive at an optimal model design that maximizes performance. Building on this, we find that an MoE model whose activation rate lies in an optimal region outperforms its dense counterpart under the same total parameter count, training compute, and data budget. More importantly, this optimal region remains consistent across model sizes. Although the improved performance comes at the cost of requiring more training data, we show that this can be resolved by reusing data. We validate our findings through extensive experiments, training nearly 200 language models at the 2B scale and over 50 at the 7B scale, cumulatively processing 50 trillion tokens. All code and models will be released publicly.
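The equal-resource setup in the abstract can be sketched numerically: match a dense transformer and an MoE variant on total parameter count, then hold training compute fixed, which forces the MoE (with fewer activated parameters per token) to consume more tokens and hence reuse data. All model shapes, budgets, and the C ≈ 6·N·D compute rule of thumb below are illustrative assumptions for this sketch, not the paper's actual configurations.

```python
# Illustrative accounting for an equal-resource MoE vs. dense comparison.
# Shapes and the 6*N*D compute approximation are assumptions, not the
# paper's configurations.

def dense_params(d_model, n_layers, vocab=32_000):
    """Rough dense-transformer parameter count: attention (4 d^2) plus a
    4x-expansion FFN (8 d^2) per layer, plus embedding/unembedding."""
    return n_layers * (4 * d_model**2 + 8 * d_model**2) + 2 * vocab * d_model

def moe_params(d_model, n_layers, n_experts, d_expert, top_k, vocab=32_000):
    """Rough MoE parameter counts (total, activated) when each FFN is
    replaced by n_experts experts of hidden size d_expert, top_k active."""
    attn = 4 * d_model**2
    expert = 2 * d_model * d_expert          # up + down projection
    total = n_layers * (attn + n_experts * expert) + 2 * vocab * d_model
    active = n_layers * (attn + top_k * expert) + 2 * vocab * d_model
    return total, active

d_model, n_layers = 2048, 24
n_dense = dense_params(d_model, n_layers)

# Split the dense FFN (hidden size 8192) into 16 experts of hidden size
# 512, so total parameters match the dense model exactly; activate 2.
total, active = moe_params(d_model, n_layers, n_experts=16, d_expert=512, top_k=2)
rate = active / total                        # activation rate

# Under equal training compute C ~ 6 * N_active * D, the MoE must see
# n_dense / active times as many tokens; with a fixed corpus, that ratio
# is the number of passes (data reuse) required.
epochs = n_dense / active
print(f"activation rate {rate:.2f}, data-reuse factor {epochs:.2f}x")
```

With these assumed shapes the MoE activates roughly half its parameters and must make about two passes over the dense model's corpus, which is the trade-off the abstract resolves via data reuse.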

One-sentence summary·Auto-generated by claude-haiku-4-5-20251001

MoEs with optimal activation rates surpass dense LLMs under equal resource constraints (parameters, compute, data) when paired with a data-reuse strategy.

Contributions·Auto-generated by claude-haiku-4-5-20251001
  • Comprehensively investigates the MoE architecture and arrives at an optimal model design that maximizes performance
  • Identifies an optimal activation-rate region in which MoE consistently outperforms its dense counterpart under the same total parameters and compute
  • Demonstrates that the optimal activation region remains consistent across model sizes and can be achieved with data reuse
Methods used·Auto-generated by claude-haiku-4-5-20251001
  • Mixture-of-Experts
  • Architecture optimization
  • Data reuse
Limitations (author-stated)·Auto-generated by claude-haiku-4-5-20251001
  • Limited to models no larger than 7B due to the high computational cost
  • Focused mainly on the impact of several main MoE components while fixing others to narrow the experimental scale
Future work (author-stated)·Auto-generated by claude-haiku-4-5-20251001
  • Explore the impact of optimal activation rates on model capabilities
  • Investigate whether similar conclusions hold for other training methods such as upcycling and MoEfication

Author keywords

  • Large language models (LLM)
  • Pre-training
  • Mixture-of-Experts (MoE)
