ICLR 2026 Orals

AutoEP: LLMs-Driven Automation of Hyperparameter Evolution for Metaheuristic Algorithms

Zhenxing Xu, Yizhe Zhang, Weidong Bao, Hao Wang, Ming Chen, Haoran Ye, Wenzheng Jiang, Hui Yan, Ji Wang

LLMs & Reasoning · Sat, Apr 25 · 4:27 PM–4:37 PM · 202 A/B · Avg rating: 6.50 (6–8)

Abstract

Dynamically configuring algorithm hyperparameters is a fundamental challenge in computational intelligence. While learning-based methods offer automation, they suffer from prohibitive sample complexity and poor generalization. We introduce AutoEP, a novel framework that bypasses training entirely by leveraging Large Language Models (LLMs) as zero-shot reasoning engines for algorithm control. AutoEP's core innovation lies in a tight synergy between two components: (1) an online Exploratory Landscape Analysis (ELA) module that provides real-time, quantitative feedback on the search dynamics, and (2) a multi-LLM reasoning chain that interprets this feedback to generate adaptive hyperparameter strategies. This approach grounds high-level reasoning in empirical data, mitigating hallucination. Evaluated on three distinct metaheuristics across diverse combinatorial optimization benchmarks, AutoEP consistently outperforms state-of-the-art tuners, including neural evolution and other LLM-based methods. Notably, our framework enables open-source models like Qwen3-30B to match the performance of GPT-4, demonstrating a powerful and accessible new paradigm for automated hyperparameter design. Our code is available at https://anonymous.4open.science/r/AutoEP-3E11.
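The feedback loop the abstract describes (ELA features computed online, fed to an LLM that returns updated hyperparameters for the running metaheuristic) can be sketched in miniature. This is an illustrative sketch only, not the authors' implementation: the function names `ela_features` and `llm_suggest`, the sphere objective, and the toy elitist mutation loop are all assumptions, and the LLM call is replaced by a fixed rule for self-containment.

```python
import random
import statistics

def ela_features(fitnesses):
    """Simplified stand-in for online Exploratory Landscape Analysis:
    summarize the current population's fitness distribution."""
    return {
        "best": min(fitnesses),
        "mean": statistics.mean(fitnesses),
        "spread": statistics.pstdev(fitnesses),
    }

def llm_suggest(features, current):
    """Placeholder for the multi-LLM reasoning chain. In AutoEP the ELA
    feedback would be serialized into a prompt; a fixed rule stands in here."""
    hp = dict(current)
    if features["spread"] < 1e-3:      # population has converged: explore more
        hp["mutation_rate"] = min(1.0, hp["mutation_rate"] * 2.0)
    else:                              # still diverse: exploit
        hp["mutation_rate"] = max(0.01, hp["mutation_rate"] * 0.9)
    return hp

def sphere(x):
    return sum(v * v for v in x)

def run(generations=20, pop_size=30, dim=5, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    hp = {"mutation_rate": 0.1}
    initial_best = min(sphere(ind) for ind in pop)
    for _ in range(generations):
        fits = [sphere(ind) for ind in pop]
        # hyperparameters are adapted online from the ELA feedback
        hp = llm_suggest(ela_features(fits), hp)
        best = min(pop, key=sphere)
        # elitist step: keep the incumbent, refill with mutants of it
        pop = [best] + [
            [v + rng.gauss(0, hp["mutation_rate"]) for v in best]
            for _ in range(pop_size - 1)
        ]
    return initial_best, min(sphere(ind) for ind in pop)

start, end = run()
print(f"best fitness: {start:.3f} -> {end:.3f}")
```

The key design point the sketch mirrors is that the reasoning step sees only quantitative search statistics, not raw solutions, which is what grounds the LLM's suggestions in empirical data.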

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

AutoEP uses LLM reasoning with real-time landscape analysis to dynamically control metaheuristic algorithms without training.

Contributions
  • Introduces AutoEP framework leveraging LLMs as zero-shot reasoning engines for algorithm configuration
  • Synergizes Exploratory Landscape Analysis (ELA) module with multi-LLM reasoning chain for hyperparameter adaptation
  • Grounds high-level reasoning in empirical search data, mitigating hallucination
  • Demonstrates open-source Qwen3-30B matches GPT-4 performance on hyperparameter tuning
Methods used
  • Large language models
  • Exploratory landscape analysis
  • Zero-shot reasoning
  • Metaheuristic algorithms
Datasets used
  • Combinatorial optimization benchmarks
Limitations (author-stated)

Authors did not state explicit limitations.

Future work (author-stated)

Authors did not state explicit future directions.

Author keywords

  • LLMs
  • Optimization
  • Metaheuristic algorithm
  • Automatic Algorithm Design
