ICLR 2026 Orals

Reasoning with Sampling: Your Base Model is Smarter Than You Think

Aayush Karan, Yilun Du

LLMs & Reasoning · Thu, Apr 23 · 4:03 PM–4:13 PM · Amphitheater · Avg rating: 7.50 (6–8)
Author-provided TL;DR

We find a training-free sampling algorithm that achieves reasoning boosts on base models comparable to those obtained by RL techniques.

Abstract

Frontier reasoning models have exhibited incredible capabilities across a wide array of disciplines, driven by post-training large language models (LLMs) with reinforcement learning (RL). However, despite the widespread success of this paradigm, much of the literature has been devoted to the question of whether RL elicits truly novel behaviors that are not present in the base models. In our work, we approach this question from a different angle, instead asking whether comparable reasoning capabilities can be elicited from base models at inference time by pure sampling, without any additional training. Inspired by Markov chain Monte Carlo (MCMC) techniques for sampling from sharpened distributions, we propose a simple iterative sampling algorithm leveraging the base models' own likelihoods. Over different base models, we show that our algorithm offers substantial boosts in reasoning that nearly match and even outperform those from RL on a wide variety of single-shot tasks, including MATH500, HumanEval, and GPQA. Moreover, our sampler avoids the collapse in diversity over multiple samples that is characteristic of RL post-training. Crucially, our method does not require training, curated datasets, or a verifier, suggesting broad applicability beyond easily verifiable domains.
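The abstract's core idea, sampling from a sharpened ("power") distribution p(x)^α of the base model via MCMC, can be illustrated with a toy sketch. This is not the paper's algorithm; it is a minimal Metropolis-Hastings example over a hypothetical categorical "base model" (the names `base_probs`, `alpha`, and the distribution values are illustrative assumptions), showing how raising likelihoods to a power α > 1 concentrates mass on high-likelihood outputs without any training:

```python
import random

random.seed(0)

# Toy stand-in for a base model's likelihood over candidate completions.
# In the paper's setting this would be an autoregressive LLM likelihood.
base_probs = {"A": 0.5, "B": 0.3, "C": 0.15, "D": 0.05}

def sample_base():
    """Draw one outcome from the toy base distribution."""
    r = random.random()
    cum = 0.0
    for x, p in base_probs.items():
        cum += p
        if r < cum:
            return x
    return "D"

def power_sample_mh(alpha=4.0, steps=2000):
    """Metropolis-Hastings targeting p(x)^alpha, proposing from p(x).

    With an independence proposal q = p, the acceptance ratio simplifies:
      min(1, [p(x')^a / p(x)^a] * [p(x) / p(x')]) = min(1, (p(x')/p(x))^(a-1))
    """
    x = sample_base()
    counts = {k: 0 for k in base_probs}
    for _ in range(steps):
        x_new = sample_base()
        ratio = (base_probs[x_new] / base_probs[x]) ** (alpha - 1)
        if random.random() < min(1.0, ratio):
            x = x_new  # accept the proposal
        counts[x] += 1
    return counts

counts = power_sample_mh()
```

Under the sharpened target with α = 4, outcome "A" carries roughly 87% of the normalized mass (0.5⁴ dominates 0.3⁴ and the rest), so the chain spends most of its steps there; the base distribution alone would visit "A" only half the time.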

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

A power-sampling algorithm elicits strong reasoning from base models at inference time via MCMC, without additional training.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Algorithm samples from base models without training by targeting a power (sharpened) distribution
  • Motivated by Markov chain Monte Carlo techniques applied to autoregressive generation
  • Achieves single-shot reasoning performance on par with state-of-the-art RL-posttraining
  • Avoids diversity collapse characteristic of RL-posttraining while maintaining verifier-free applicability
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • MCMC sampling
  • Power distribution sampling
  • Iterative sampling
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • MATH500
  • HumanEval
  • GPQA
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit limitations.

Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001

Authors did not state explicit future directions.

Author keywords

  • LLMs
  • reasoning
  • MCMC
  • sampling
  • inference-time compute
