ICLR 2026 Orals

Huxley-Gödel Machine: Human-Level Coding Agent Development by an Approximation of the Optimal Self-Improving Machine

Wenyi Wang, Piotr Piękos, Li Nanbo, Firas Laakom, Yimeng Chen, Mateusz Ostaszewski, Mingchen Zhuge, Jürgen Schmidhuber

LLMs & Reasoning Thu, Apr 23 · 10:54 AM–11:04 AM · 202 A/B Avg rating: 6.00 (4–8)
Author-provided TL;DR

We propose the Huxley-Gödel Machine, an algorithm that guides self-improvements using an estimate of the value function of Gödel Machines.

Abstract

Recent studies operationalize self-improvement through coding agents that edit their own codebases. They grow a tree of self-modifications through expansion strategies that favor higher software engineering benchmark performance, assuming that this implies more promising subsequent self-modifications. However, we identify a mismatch between the agent’s self-improvement potential (metaproductivity) and its coding benchmark performance, namely the Metaproductivity-Performance Mismatch. Inspired by Huxley’s concept of clade, we propose a metric ($\mathrm{CMP}$) that aggregates the benchmark performances of the descendants of an agent as an indicator of its potential for self-improvement. We show that, in our self-improving coding agent development setting, access to the true CMP is sufficient to simulate how the Gödel Machine would behave under certain assumptions. We introduce the Huxley-Gödel Machine (HGM), which, by estimating $\mathrm{CMP}$ and using it as guidance, searches the tree of self-modifications. On SWE-bench Verified and Polyglot, HGM outperforms prior self-improving coding agent development methods while using fewer allocated CPU hours. Last but not least, HGM demonstrates strong transfer to other coding datasets and LLMs. The agent optimized by HGM on SWE-bench Verified with GPT-5 mini and evaluated on SWE-bench Lite with GPT-5 achieves human-level performance, matching the best officially checked results of human-engineered coding agents. Our code is publicly available at https://github.com/metauto-ai/HGM.
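The abstract defines CMP as an aggregate over the benchmark performances of an agent's descendants in the self-modification tree. A minimal sketch of that idea, assuming a simple tree of agent nodes and mean aggregation (the node names, tree structure, and aggregation choice here are illustrative assumptions, not the paper's implementation):

```python
# Hypothetical sketch of the Clade-Metaproductivity (CMP) idea: aggregate
# the benchmark scores of ALL descendants of an agent node (its clade).
class AgentNode:
    def __init__(self, score):
        self.score = score    # benchmark performance of this agent
        self.children = []    # agents produced by its self-modifications

def clade_scores(node):
    """Collect benchmark scores of every descendant of `node`
    (the clade, excluding the node itself)."""
    scores = []
    for child in node.children:
        scores.append(child.score)
        scores.extend(clade_scores(child))
    return scores

def cmp_estimate(node):
    """Estimate CMP as the mean descendant score; 0.0 if none exist."""
    scores = clade_scores(node)
    return sum(scores) / len(scores) if scores else 0.0

# A parent with a mediocre score of its own can still have high CMP if its
# descendants score well -- the metaproductivity-performance mismatch.
root = AgentNode(0.40)
a, b = AgentNode(0.35), AgentNode(0.70)
root.children = [a, b]
a.children = [AgentNode(0.80)]
print(cmp_estimate(root))  # mean of the descendant scores 0.35, 0.80, 0.70
```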

One-sentence summary·Auto-generated by claude-haiku-4-5-20251001

HGM identifies metaproductivity-performance mismatch and uses clade-based lineage metrics to guide self-improving coding agents.

Contributions·Auto-generated by claude-haiku-4-5-20251001
  • Introduces Clade-Metaproductivity (CMP) metric capturing long-term self-improvement potential via lineage-based evaluation
  • Huxley-Gödel Machine framework that estimates CMP and uses Thompson sampling with adaptive scheduling for agent search
  • Achieves human-level coding performance on SWE-bench Lite with better CPU efficiency than baseline methods
Methods used·Auto-generated by claude-haiku-4-5-20251001
  • Reinforcement learning on code modification
  • Thompson sampling
  • Lineage-based metrics
  • Tree search with adaptive scheduling
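The methods list pairs Thompson sampling with tree search over self-modifications. One common way to realize this, sketched here under the assumption of Beta posteriors over each node's clade pass/fail record (the posterior model, prior, and node data are illustrative assumptions, not the paper's exact algorithm):

```python
# Hypothetical Thompson-sampling node selection for the self-modification
# tree: model each node's clade success rate with a Beta posterior over
# descendant benchmark pass/fail outcomes, draw one sample per node, and
# expand the node with the highest draw.
import random

def select_node(nodes):
    """nodes: list of (name, successes, failures) tallied over a node's
    clade of descendant evaluations. Returns the name to expand next."""
    best_name, best_draw = None, -1.0
    for name, succ, fail in nodes:
        # Beta(1, 1) uniform prior; unexplored nodes stay competitive.
        draw = random.betavariate(succ + 1, fail + 1)
        if draw > best_draw:
            best_name, best_draw = name, draw
    return best_name

random.seed(0)
tree = [("agent-A", 8, 2), ("agent-B", 3, 7), ("agent-C", 0, 0)]
print(select_node(tree))
```

Because draws come from the full posterior, a node with few evaluations ("agent-C") is still sampled occasionally, balancing exploration against exploiting nodes whose clades already look productive.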
Datasets used·Auto-generated by claude-haiku-4-5-20251001
  • SWE-bench Verified
  • SWE-bench Lite
  • Polyglot
Limitations (author-stated)·Auto-generated by claude-haiku-4-5-20251001
  • Currently focuses on symbolic self-improvement (editing scaffolds, prompts, control logic) rather than weight-level modifications
  • Requires full sandboxing and controlled experimental setup
Future work (author-stated)·Auto-generated by claude-haiku-4-5-20251001
  • Extending framework to weight-level self-modification through direct manipulation of neural network parameters

Author keywords

  • Self-Improvement
  • Coding Agents
  • Gödel Machine
