TROLL: Trust Regions Improve Reinforcement Learning for Large Language Models
Philipp Becker, Niklas Freymuth, Serge Thilges, Fabian Otto, Gerhard Neumann
Replacing PPO's clipping objective with more principled trust regions improves RL from verifiable rewards.
Abstract
Reinforcement Learning (RL) with PPO-like clip objectives has become the standard choice for reward-based fine-tuning of large language models (LLMs). Although recent work has explored improved estimators of advantages and normalization, the clipping mechanism itself has remained untouched. Originally introduced as a proxy for principled KL-based trust regions, clipping is a crude approximation that often causes unstable updates and suboptimal performance. We replace the clip objective with a novel discrete differentiable trust region projection, which provides principled token-level KL constraints. The projection operates on a sparse subset of the model’s most important token logits to balance computational cost and projection effectiveness. Our approach, Trust Region Optimization for Large Language Models (TROLL), serves as a direct replacement for PPO-like clipping during training and does not alter the model’s inference behavior. Across mathematical reasoning and code generation tasks, model families, as well as advantage-estimation methods, TROLL consistently outperforms PPO-like clipping in terms of training speed, stability, and final success rates.
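For context, the PPO-style clipped surrogate objective that TROLL replaces can be sketched per token as follows. This is a minimal pure-Python sketch; the function name and list-based interface are illustrative, not the paper's implementation:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate loss, averaged over tokens.

    logp_new, logp_old: log-probabilities of the sampled tokens under the
    new and old policies. advantages: per-token advantage estimates.
    """
    losses = []
    for ln, lo, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(ln - lo)          # importance ratio pi_new / pi_old
        unclipped = ratio * adv
        clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * adv
        # Taking the min makes the objective pessimistic: once the ratio
        # leaves [1-eps, 1+eps] in the beneficial direction, the gradient
        # through the ratio is cut off abruptly -- the "crude approximation"
        # of a trust region that the abstract refers to.
        losses.append(-min(unclipped, clipped))
    return sum(losses) / len(losses)
```

The hard cutoff at the clip boundary is what a KL-based trust region replaces with a smooth, principled constraint.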
TROLL replaces the PPO clip objective with a differentiable trust region projection for more stable and efficient reward-based fine-tuning of LLMs.
- Introduces TROLL, a trust-region-based policy gradient objective that replaces the PPO-clip mechanism
- Proposes novel discrete differentiable trust region projection providing token-level KL constraints
- Extends to sparse distributions focusing on important token logits for computational efficiency
- Consistently outperforms PPO-clip across models, tasks, and advantage-estimation methods
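The paper's exact projection is not reproduced here, but the core idea of a token-level KL constraint can be sketched as follows. This is a hypothetical sketch only: the bisection-based interpolation, the function names, and the KL direction are illustrative assumptions, whereas TROLL's actual projection is differentiable and operates on a sparse subset of the most important token logits:

```python
import math

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def project_to_trust_region(p_new, p_old, delta, iters=50):
    """Illustrative projection: interpolate p_new toward p_old until the
    token distribution satisfies KL(p_old || projected) <= delta.

    Bisects on the mixing weight alpha in
        projected = (1 - alpha) * p_old + alpha * p_new,
    keeping the largest alpha that stays inside the KL ball.
    """
    if kl(p_old, p_new) <= delta:
        return p_new  # already inside the trust region; no projection needed
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        mixed = [(1 - mid) * po + mid * pn for po, pn in zip(p_old, p_new)]
        if kl(p_old, mixed) <= delta:
            lo = mid  # feasible: can move further toward p_new
        else:
            hi = mid  # infeasible: back off toward p_old
    return [(1 - lo) * po + lo * pn for po, pn in zip(p_old, p_new)]
```

Unlike the clip objective's hard cutoff, a projection of this kind yields an update that stays exactly on or inside the KL boundary while remaining usable inside a differentiable training objective.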
- Policy gradient methods
- Trust region optimization
- KL constraints
- Mathematical reasoning benchmarks
- Code generation benchmarks
Limitations
- Currently evaluates only on dense models up to 14B parameters (from the paper)

Future directions
- Scale TROLL to larger models and Mixture-of-Experts architectures (from the paper)
- Extend TROLL to other modalities such as vision-language models (from the paper)
Author keywords
- RL from verifiable rewards
- Finetuning LLMs
- Trust Regions
Related orals
Mastering Sparse CUDA Generation through Pretrained Models and Deep Reinforcement Learning
SparseRL leverages deep RL and pretrained models to generate high-performance CUDA code for sparse matrix operations.
Overthinking Reduction with Decoupled Rewards and Curriculum Data Scheduling
DECS framework reduces reasoning model overthinking by decoupling necessary from redundant tokens via curriculum scheduling.
MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent
MemAgent uses RL-trained memory modules to enable LLMs to extrapolate from 8K to 3.5M token contexts with minimal performance degradation.
DiffusionNFT: Online Diffusion Reinforcement with Forward Process
DiffusionNFT enables efficient online reinforcement learning for diffusion models via forward process optimization with up to 25x efficiency gains.
Hyperparameter Trajectory Inference with Conditional Lagrangian Optimal Transport
Hyperparameter Trajectory Inference uses conditional Lagrangian optimal transport to reconstruct neural network outputs across hyperparameter spectra without expensive retraining.