ICLR 2026 Orals

How Reliable is Language Model Micro-Benchmarking?

Gregory Yauney, Shahzaib Saqib Warraich, Swabha Swayamdipta

LLMs & Reasoning · Thu, Apr 23 · 3:51 PM–4:01 PM · 203 A/B · Avg rating: 6.50 (4–8)

Abstract

Micro-benchmarking offers a solution to the often prohibitive time and cost of language model development: evaluate on a very small subset of existing benchmarks. Can these micro-benchmarks, however, rank models as consistently as the full benchmarks they replace? And can they rank models more consistently than selecting a random subset of data points? In many scenarios, we find that the answer is no. We introduce a meta-evaluation measure for micro-benchmarking which investigates how well a micro-benchmark can rank two models as a function of their performance difference on the full benchmark. This approach can determine which model pairs can be ranked correctly by a micro-benchmark, allowing for a finer-grained analysis of the trade-off between micro-benchmark size and reliability. Prior work has suggested selecting as few as 10 examples; we find that no micro-benchmarking method can consistently rank model pairs 3.5 points of accuracy apart on MMLU-Pro or 4 points apart on BIG-bench Hard. In order to consistently rank model pairs with relatively similar performances, we show that often as many as 250 examples must be selected, at which point random sampling is competitive with existing micro-benchmarking methods. When comparing only 8B instruction-tuned models on MMLU-Pro micro-benchmarks with 25 examples, we find that more than half of pairwise comparisons are not likely to be preserved. Our work provides actionable guidance for both micro-benchmark users and developers in navigating the trade-off between evaluation efficiency and reliability.
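
To make the meta-evaluation concrete, here is a minimal sketch, assuming per-example 0/1 scores for each model on the full benchmark. The `scores` layout, the `agreement_by_gap` helper, and the synthetic demo are illustrative assumptions, not the authors' code: it repeatedly samples a random micro-benchmark of size n and measures how often the sampled ranking of each model pair matches its full-benchmark ranking, as a function of the pair's full-benchmark accuracy gap.

```python
# Minimal sketch (hypothetical data layout, not the paper's released code):
# estimate how often a random micro-benchmark of size n preserves the
# full-benchmark ranking of each model pair, as a function of the pair's
# full-benchmark accuracy gap.
from itertools import combinations
import numpy as np

def agreement_by_gap(scores, n, trials=1000, seed=0):
    """scores: {model_name: 0/1 array over the same benchmark examples}.
    Returns (full_benchmark_gap, ranking_agreement) per model pair."""
    rng = np.random.default_rng(seed)
    num_examples = len(next(iter(scores.values())))
    results = []
    for a, b in combinations(scores, 2):
        full_diff = scores[a].mean() - scores[b].mean()
        agree = 0
        for _ in range(trials):
            # Sample a micro-benchmark and rank the pair on it alone.
            idx = rng.choice(num_examples, size=n, replace=False)
            micro_diff = scores[a][idx].mean() - scores[b][idx].mean()
            agree += int(np.sign(micro_diff) == np.sign(full_diff))
        results.append((abs(full_diff), agree / trials))
    return sorted(results)

# Synthetic demo: five models with true accuracies two points apart.
rng = np.random.default_rng(1)
scores = {f"model_{i}": (rng.random(2000) < 0.50 + 0.02 * i).astype(float)
          for i in range(5)}
for gap, agr in agreement_by_gap(scores, n=25):
    print(f"full-benchmark gap = {100 * gap:4.1f} pts  agreement = {agr:.2f}")
```

On synthetic data like this, pairs only a couple of points apart agree with the full-benchmark ranking barely better than chance at n = 25, which is the regime the paper's meta-evaluation measure is designed to expose.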

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Introduces meta-evaluation measures showing that many micro-benchmarks cannot reliably rank similarly performing models.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Meta-evaluation measures for micro-benchmarking: ranking agreement and the Minimum Detectable Ability Difference (MDAD)
  • Shows that no micro-benchmarking method can consistently rank model pairs 3.5 accuracy points apart on MMLU-Pro or 4 points apart on BIG-bench Hard
  • Demonstrates that as many as 250 examples are often needed to reliably rank similarly performing pairs, at which point random sampling is competitive with existing methods (see the sketch after this list)
  • Provides actionable guidance for micro-benchmark users and developers
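
For rough intuition about these sample sizes, a back-of-envelope calculation, under an independence assumption rather than the paper's analysis: accuracy measured on n examples has binomial standard error at most sqrt(0.25/n), so the gap a micro-benchmark can resolve shrinks only as 1/sqrt(n).

```python
# Back-of-envelope only (assumes independent examples and an unpaired
# comparison; real micro-benchmarks score both models on the same examples,
# which correlates errors and tightens this bound).
import math

for n in (10, 25, 100, 250, 1000):
    se = math.sqrt(0.25 / n)              # max std. error of one accuracy
    gap = 2 * math.sqrt(2) * se           # ~2 std. errors of an unpaired gap
    print(f"n={n:4d}  per-model SE ~ {100*se:4.1f} pts  "
          f"gap needed ~ {100*gap:4.1f} pts")
```

Because both models are scored on the same examples, correlated errors make the paired comparison tighter than this unpaired bound, but the 1/sqrt(n) scaling is the same: halving the detectable gap requires roughly four times as many examples.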
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Meta-evaluation
  • Statistical analysis
  • Correlation measurement
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • MMLU-Pro
  • BIG-bench Hard
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Experiments primarily focus on accuracy-based evaluation
  • Extensions to other evaluation settings, such as open-ended generation, remain to be explored
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Use MDAD to directly guide data selection strategies

Author keywords

  • efficient evaluation
  • meta-evaluation
  • language models
