ICLR 2026 Orals

GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning

Lakshya A Agrawal, Shangyin Tan, Dilara Soylu, Noah Ziems, Rishi Khare, Krista Opsahl-Ong, Arnav Singhvi, Herumb Shandilya, Michael J Ryan, Meng Jiang, Christopher Potts, Koushik Sen, Alex Dimakis, Ion Stoica, Dan Klein, Matei Zaharia, Omar Khattab

LLMs & Reasoning · Fri, Apr 24 · 11:30 AM–11:40 AM · Amphitheater · Avg rating: 6.00 (2–10)
Author-provided TL;DR

GEPA uses natural language reflection to optimize prompts, outperforming GRPO and MIPROv2 while needing far fewer rollouts.

Abstract

Large language models (LLMs) are increasingly adapted to downstream tasks via reinforcement learning (RL) methods like Group Relative Policy Optimization (GRPO), which often require thousands of rollouts to learn new tasks. We argue that the interpretable nature of language often provides a much richer learning medium for LLMs, compared to policy gradients derived from sparse, scalar rewards. To test this, we introduce GEPA (Genetic-Pareto), a prompt optimizer that thoroughly incorporates natural language reflection to learn high-level rules from trial and error. Given any AI system containing one or more LLM prompts, GEPA samples trajectories (e.g., reasoning, tool calls, and tool outputs) and reflects on them in natural language to diagnose problems, propose and test prompt updates, and combine complementary lessons from the Pareto frontier of its own attempts. As a result of GEPA's design, it can often turn even just a few rollouts into a large quality gain. Across six tasks, GEPA outperforms GRPO by 6 percentage points on average and by up to 19pp, while using up to 35x fewer rollouts. GEPA also outperforms the leading prompt optimizer, MIPROv2, by over 10 percentage points (e.g., +12pp on AIME-2025), and demonstrates promising results as an inference-time search strategy for code optimization. We release our code at https://github.com/gepa-ai/gepa.
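
The loop the abstract describes (sample rollouts, reflect on the trajectories in natural language, propose a prompt edit, select parents from a Pareto frontier of prior attempts) can be rendered as a short sketch. The toy below is a minimal sketch, not the gepa-ai/gepa API: evaluate and reflect_and_mutate are hypothetical stand-ins for the real rollout and LLM-reflection steps, and the instance-wise frontier test is one assumed reading of "the Pareto frontier of its own attempts."

    import random

    random.seed(0)

    def evaluate(prompt: str, example: str) -> float:
        """Toy per-example score for one rollout; stands in for running
        the real system and metric on `example` under `prompt`."""
        return random.random()

    def reflect_and_mutate(prompt: str) -> str:
        """Toy mutation standing in for an LLM that reads failing
        trajectories and proposes an edited prompt in natural language."""
        return prompt + " " + random.choice(
            ["Think step by step.", "Verify tool outputs.", "State assumptions."])

    def on_frontier(cand_scores, score_table):
        """Instance-wise Pareto test: keep a candidate if it matches the
        best score seen so far on at least one training example."""
        best_per_example = [max(col) for col in zip(*score_table)]
        return any(s >= b for s, b in zip(cand_scores, best_per_example))

    def gepa_sketch(seed_prompt, train, budget=20):
        pool = [seed_prompt]
        table = [[evaluate(seed_prompt, x) for x in train]]
        for _ in range(budget):
            # Pick a parent from the Pareto frontier of all attempts so far.
            frontier = [p for p, s in zip(pool, table) if on_frontier(s, table)]
            parent = random.choice(frontier)
            child = reflect_and_mutate(parent)
            child_scores = [evaluate(child, x) for x in train]
            # Keep the child only if it improves the best score on some example.
            best = [max(col) for col in zip(*table)]
            if any(c > b for c, b in zip(child_scores, best)):
                pool.append(child)
                table.append(child_scores)
        # Report the candidate with the highest mean score over the train set.
        return max(zip(pool, table), key=lambda pt: sum(pt[1]))[0]

    print(gepa_sketch("You are a helpful assistant.", ["q1", "q2", "q3"]))
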

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

GEPA combines genetic-Pareto candidate selection with natural language reflection to outperform RL-based adaptation (GRPO) while using up to 35x fewer rollouts.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Introduces GEPA, a prompt optimizer that uses natural language reflection to learn high-level rules from trial and error
  • Samples system trajectories and reflects on them in natural language to diagnose problems, then proposes and tests prompt updates drawn from the Pareto frontier of prior attempts (see the worked example after this list)
  • Outperforms GRPO by 6 percentage points on average (up to 19pp) while using up to 35x fewer rollouts
  • Exceeds the leading prompt optimizer, MIPROv2, by over 10 percentage points on benchmarks including AIME-2025
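
On one plausible reading of the second bullet, the Pareto frontier is instance-wise: a candidate prompt survives if it is the best attempt so far on at least one training example, so complementary specialists are retained even when their averages are mediocre. The numbers and candidate names below are invented purely for illustration, not results from the paper.

    # Instance-wise Pareto selection over made-up per-example scores
    # (illustrative only; candidate names and values are hypothetical).
    scores = {
        "cand_A": [0.9, 0.2, 0.4],   # best on example 0
        "cand_B": [0.5, 0.8, 0.3],   # best on example 1
        "cand_C": [0.6, 0.6, 0.35],  # decent average, but best nowhere
        "cand_D": [0.1, 0.1, 0.5],   # worst average, yet best on example 2
    }
    best = [max(v[j] for v in scores.values()) for j in range(3)]
    frontier = [name for name, v in scores.items()
                if any(s >= b for s, b in zip(v, best))]
    print(frontier)  # ['cand_A', 'cand_B', 'cand_D'] -- cand_C is dropped

Note that cand_D survives despite the lowest mean: under this kind of selection, a lesson that helps only a few examples stays available to be merged into later candidates.
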
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Prompt optimization
  • Genetic algorithms
  • Pareto optimization
  • Natural language reflection
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • AIME-2025
  • Various reasoning benchmarks
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • prompt optimization
  • natural language
  • reflection
  • large language models
  • agent design
  • agent discovery
  • code optimization
  • compound AI systems
  • genetic
  • language based learning
  • evolutionary algorithms
