ICLR 2026 Orals

Gaia2: Benchmarking LLM Agents on Dynamic and Asynchronous Environments

Romain Froger, Pierre Andrews, Matteo Bettini, Amar Budhiraja, Ricardo Silveira Cabral, Virginie Do, Emilien Garreau, Jean-Baptiste Gaya, Hugo Laurençon, Maxime Lecanu, Kunal Malkan, Dheeraj Mekala, Pierre Menard, Gerard Moreno-Torres Bertran, Ulyana Piterbarg, Mikhail Plekhanov, Mathieu Rita, Andrey Rusakov, Vladislav Vorotilov, Mengjue Wang, Ian Yu, Amine Benhalloum, Grégoire Mialon, Thomas Scialom

LLMs & Reasoning · Fri, Apr 24 · 11:06 AM–11:16 AM · Amphitheater · Avg rating: 8.00 (range: 6–10)
Author-provided TL;DR

Gaia2 evaluates LLM agents in asynchronous, dynamic environments with action-level verification, revealing fundamental trade-offs between reasoning, speed, and robustness.

Abstract

We introduce **Gaia2**, a benchmark for evaluating large language model agents in realistic, asynchronous environments. Unlike prior static or synchronous evaluations, Gaia2 introduces scenarios where environments evolve independently of agent actions, requiring agents to operate under temporal constraints, adapt to noisy and dynamic events, resolve ambiguity, and collaborate with other agents. Each scenario is paired with a write-action verifier, enabling fine-grained, action-level evaluation and making Gaia2 directly usable for reinforcement learning from verifiable rewards. Our evaluation of state-of-the-art proprietary and open-source models shows that no model dominates across capabilities: GPT-5 (high) reaches the strongest overall score of 42% pass@1 but fails on time-sensitive tasks; Claude-4 Sonnet trades accuracy and speed for lower cost; and Kimi-K2 leads among open-source models with 21% pass@1. These results highlight fundamental trade-offs between reasoning, efficiency, and robustness, and expose challenges in closing the “sim2real” gap. Gaia2 is built on a consumer environment within the open-source **Agents Research Environments** (ARE) platform and is designed to be easy to extend. By releasing Gaia2 alongside the foundational ARE framework, we aim to provide the community with a flexible infrastructure for developing, benchmarking, and training the next generation of practical agent systems.
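
To make the write-action verification concrete, the sketch below shows how per-action verifier checks could be turned into the verifiable reward signal that RLVR-style training consumes. All names here (`Action`, `Scenario.verify`, `rollout_rewards`) are hypothetical illustrations, not the actual Gaia2/ARE API, which uses scenario-specific verifiers rather than plain equality against oracle actions.

```python
from dataclasses import dataclass, field


@dataclass
class Action:
    """A tool call emitted by the agent; write actions mutate the environment."""
    tool: str
    args: dict


@dataclass
class Scenario:
    """Hypothetical stand-in for a Gaia2 scenario paired with its verifier.

    The real benchmark pairs each scenario with a write-action verifier;
    equality against oracle write actions is only an illustration.
    """
    expected_writes: list[Action] = field(default_factory=list)

    def verify(self, action: Action) -> bool:
        # Reward only write actions that match an oracle write action.
        return any(action == e for e in self.expected_writes)


def rollout_rewards(scenario: Scenario, trajectory: list[Action]) -> list[float]:
    """Fine-grained, per-action rewards: the signal RLVR-style training needs."""
    return [1.0 if scenario.verify(a) else 0.0 for a in trajectory]


scenario = Scenario(expected_writes=[Action("send_email", {"to": "alice@example.com"})])
trajectory = [
    Action("search_contacts", {"query": "alice"}),      # read action: no reward
    Action("send_email", {"to": "alice@example.com"}),  # verified write action
]
print(rollout_rewards(scenario, trajectory))  # -> [0.0, 1.0]
```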

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Gaia2 benchmarks LLM agents in asynchronous dynamic environments with action-level verification for RL training.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Benchmark of realistic asynchronous scenarios in which the environment evolves independently of agent actions (see the sketch after this list)
  • Write-action verifier enabling fine-grained action-level evaluation usable for RL from verifiable rewards
  • Reveals fundamental trade-offs between reasoning, efficiency, and robustness across different models
  • Asynchronous event-driven framework with action-level verification directly applicable to RLVR
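
To ground the asynchronous, event-driven setting described above, here is a minimal `asyncio` sketch of the core idea: scheduled events mutate shared state on their own clock while the agent acts concurrently, so a slow agent can miss time-sensitive events. This is an illustration only; none of the names below come from the actual Gaia2 or ARE code.

```python
import asyncio


async def environment_events(state: dict) -> None:
    """Fire events on the environment's own clock, independent of the agent."""
    for delay, event in [(0.1, "email_arrives"), (0.3, "meeting_moved")]:
        await asyncio.sleep(delay)
        state.setdefault("events", []).append(event)


async def agent_loop(state: dict) -> None:
    """The agent observes and acts while the environment keeps evolving."""
    for step in range(3):
        await asyncio.sleep(0.15)  # stand-in for model/tool-call latency
        observed = list(state.get("events", []))
        print(f"step {step}: agent observes {observed} and acts")


async def main() -> None:
    state: dict = {}
    # Run both concurrently: the environment does not wait for the agent,
    # so higher latency means acting on staler observations.
    await asyncio.gather(environment_events(state), agent_loop(state))


asyncio.run(main())
```
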
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Agent benchmarking
  • Action-level verification
  • Asynchronous environment simulation
  • Mobile environment modeling
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • benchmark
  • agents
  • rlvr
  • multi-agent systems
  • reasoning
  • large language models
  • evaluation
  • framework
