Task-free Adaptive Meta Black-box Optimization
Chao Wang, Licheng Jiao, Lingling Li, Jiaxuan Zhao, Guanchun Wang, Fang Liu, Shuyuan Yang
Abstract
Handcrafted optimizers become prohibitively inefficient for complex black-box optimization (BBO) tasks. MetaBBO addresses this challenge by meta-learning to automatically configure optimizers for low-level BBO tasks, thereby eliminating heuristic dependencies. However, existing methods typically require extensive handcrafted training tasks to learn meta-strategies that generalize to target tasks, which poses a critical limitation for realistic applications with unknown task distributions. To overcome this limitation, we propose the Adaptive meta Black-box Optimization Model (ABOM), which performs online parameter adaptation using only optimization data from the target task, obviating the need for predefined task distributions. Unlike conventional metaBBO frameworks that decouple the meta-training and optimization phases, ABOM introduces a closed-loop adaptive parameter learning mechanism in which parameterized evolutionary operators continuously self-update by leveraging the populations generated during optimization. This paradigm shift enables zero-shot optimization: ABOM achieves competitive performance on synthetic BBO benchmarks and realistic unmanned aerial vehicle (UAV) path planning problems without any handcrafted training tasks. Visualization studies reveal that the parameterized evolutionary operators exhibit statistically significant search patterns, including natural selection and genetic recombination.
ABOM performs task-free adaptive meta black-box optimization using online parameter adaptation without predefined task distributions.
- Introduces closed-loop adaptive parameter learning mechanism where evolutionary operators continuously self-update during optimization
- Eliminates dependency on handcrafted training tasks by performing online adaptation using only target task optimization data
- Enables zero-shot optimization through parameterized evolutionary operators updated via gradient descent
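The closed-loop mechanism above can be illustrated with a minimal sketch. This is not the paper's actual operator or architecture: it assumes a simple Gaussian sampling operator and a REINFORCE-style (score-function) gradient estimate, which makes "self-update from the generated population via gradient descent" concrete for a non-differentiable black-box objective.

```python
import numpy as np

def sphere(x):
    """Black-box objective to minimize; stands in for the target task."""
    return np.sum(x ** 2)

def closed_loop_sketch(f, dim=5, pop=32, steps=200, lr=0.1, seed=0):
    """Hypothetical ABOM-like loop: a parameterized sampling operator
    (Gaussian mean and per-dimension log std-dev) generates a population,
    then updates its own parameters from that population's fitness."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(size=dim)      # operator parameters: mean ...
    log_sigma = np.zeros(dim)      # ... and log standard deviation
    for _ in range(steps):
        sigma = np.exp(log_sigma)
        eps = rng.normal(size=(pop, dim))
        x = mu + sigma * eps       # generate population with the operator
        fit = np.array([f(xi) for xi in x])
        # Rank-based weights: lower (better) fitness gets larger weight,
        # keeping the update invariant to the objective's scale.
        ranks = np.argsort(np.argsort(fit))
        w = (pop - 1 - ranks) / (pop - 1) - 0.5
        # Score-function gradients of the log-likelihood of the samples.
        g_mu = (w[:, None] * eps / sigma).mean(axis=0)
        g_ls = (w[:, None] * (eps ** 2 - 1.0)).mean(axis=0)
        # Closed loop: the operator self-updates during optimization.
        mu += lr * sigma * g_mu
        log_sigma += 0.5 * lr * g_ls
    return mu, f(mu)

mu, best = closed_loop_sketch(sphere)
print(best)  # fitness of the adapted mean; far below the random start
```

No meta-training phase or handcrafted task set appears anywhere in the loop: the only data consumed are the populations the operator itself generates on the single target task.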
- Meta black-box optimization
- Evolutionary algorithms
- Online parameter adaptation
- Synthetic benchmarks
- UAV path planning
Limitations (from the paper)
- Cubic computational bottleneck O(d³) from attention mechanisms
- Limited exploration of hybrid training paradigms integrating pretraining with online adaptation
- Convergence rate analysis needed for theoretical examination of adaptive parameter learning
Future directions (from the paper)
- Address the cubic computational bottleneck through sparse or low-rank attention mechanisms
- Dynamically adapt population size and model capacity during optimization
- Conduct convergence rate analysis for adaptive parameter learning
- Explore hybrid training paradigms integrating pretraining with online adaptation
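To make the proposed attention remedy concrete, here is a generic sketch (not from the paper) of kernelized linear attention: instead of materializing the full pairwise score matrix, it factors the computation through a positive feature map `phi`, so the cost scales linearly rather than quadratically in the number of tokens. The feature map and shapes below are illustrative assumptions.

```python
import numpy as np

def full_attention(Q, K, V):
    """Standard softmax attention: builds the (n, n) score matrix."""
    S = Q @ K.T
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)  # row-wise softmax
    return A @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1e-6):
    """Low-rank/kernelized approximation: phi(Q) @ (phi(K)^T V),
    normalized per row, never forming the (n, n) matrix."""
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                  # (d, d_v): independent of n^2
    Z = Qf @ Kf.sum(axis=0)        # per-row normalizer, shape (n,)
    return (Qf @ KV) / Z[:, None]

rng = np.random.default_rng(0)
n, d = 128, 16
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (128, 16)
```

The output matches full attention only approximately (the softmax kernel is replaced by `phi`), which is the usual trade-off such sparse or low-rank variants accept for the reduced complexity.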
Author keywords
- Meta Black-box Optimization
- Evolutionary Algorithms
Related orals
Mastering Sparse CUDA Generation through Pretrained Models and Deep Reinforcement Learning
SparseRL leverages deep RL and pretrained models to generate high-performance CUDA code for sparse matrix operations.
Overthinking Reduction with Decoupled Rewards and Curriculum Data Scheduling
DECS framework reduces reasoning model overthinking by decoupling necessary from redundant tokens via curriculum scheduling.
MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent
MemAgent uses RL-trained memory modules to enable LLMs to extrapolate from 8K to 3.5M token contexts with minimal performance degradation.
DiffusionNFT: Online Diffusion Reinforcement with Forward Process
DiffusionNFT enables efficient online reinforcement learning for diffusion models via forward process optimization with up to 25x efficiency gains.
Hyperparameter Trajectory Inference with Conditional Lagrangian Optimal Transport
Hyperparameter Trajectory Inference uses conditional Lagrangian optimal transport to reconstruct neural network outputs across hyperparameter spectra without expensive retraining.