ICLR 2026 Orals

LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning

Yuhao Wu, Yushi Bai, Zhiqiang Hu, Roy Ka-Wei Lee, Juanzi Li

LLMs & Reasoning · Thu, Apr 23 · 3:15 PM–3:25 PM · Amphitheater · Avg rating: 6.00 (range 4–8)

Abstract

Ultra-long generation by large language models (LLMs) is a widely demanded scenario, yet it remains a significant challenge due to their maximum generation length limit and overall quality degradation as sequence length increases. Previous approaches, exemplified by LongWriter, typically rely on "teaching", which involves supervised fine-tuning (SFT) on synthetic long-form outputs. However, this strategy heavily depends on synthetic SFT data, which is difficult and costly to construct, often lacks coherence and consistency, and tends to be overly artificial and structurally monotonous. In this work, we propose an incentivization-based approach that, starting entirely from scratch and without relying on any annotated or synthetic data, leverages reinforcement learning (RL) to foster the emergence of ultra-long, high-quality text generation capabilities in LLMs. We perform RL training starting from a base model, similar to R1-Zero, guiding it to engage in reasoning that facilitates planning and refinement during the writing process. To support this, we employ specialized reward models that steer the LLM towards improved length control, writing quality, and structural formatting. Experimental evaluations show that our LongWriter-Zero model, trained from Qwen2.5-32B, consistently outperforms traditional SFT methods on long-form writing tasks, achieving state-of-the-art results across all metrics on WritingBench and Arena-Write, and even surpassing 100B+ models such as DeepSeek R1 and Qwen3-235B.
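The abstract names three reward signals (length control, writing quality, structural formatting) but does not spell out how they are combined. Below is a minimal Python sketch of a composite reward of that shape, assuming a piecewise-linear length term and hypothetical callables quality_score and format_score that stand in for the paper's learned reward models; the weights and functional forms are illustrative choices, not the paper's.

    # Hypothetical sketch; LongWriter-Zero's actual reward models are
    # learned networks and are not reproduced here.

    def length_reward(n_tokens: int, target: int, tolerance: float = 0.2) -> float:
        """1.0 at the target length, decaying linearly to 0 once the
        output drifts past +/- tolerance * target tokens."""
        deviation = abs(n_tokens - target) / (tolerance * target)
        return max(0.0, 1.0 - deviation)

    def composite_reward(text: str, n_tokens: int, target_len: int,
                         quality_score, format_score,
                         weights=(0.4, 0.4, 0.2)) -> float:
        """Weighted sum of length control, writing quality, and structural
        formatting; quality_score and format_score are assumed to map text
        to [0, 1] and stand in for the paper's learned reward models."""
        w_len, w_qual, w_fmt = weights
        return (w_len * length_reward(n_tokens, target_len)
                + w_qual * quality_score(text)
                + w_fmt * format_score(text))

The scalar returned here is what the RL optimizer would maximize; the essential point from the abstract is only that all three criteria feed a single training signal.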

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

LongWriter-Zero applies RL from scratch to achieve ultra-long text generation without synthetic training data.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • First RL-based approach to ultra-long text generation without relying on synthetic or annotated datasets
  • Specialized reward models steering toward improved length control, writing quality, and structural formatting
  • Think Prompt incorporating explicit reasoning steps during RL to enhance planning and coherence
  • Continual pretraining substantially raises RL performance ceilings for long-form generation
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Reinforcement learning
  • Composite reward models
  • Reasoning-based prompting (see the prompt sketch after this list)
  • Continual pretraining
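The "reasoning-based prompting" above refers to the Think Prompt: the model plans in an explicit reasoning span before writing. A minimal sketch of what such a template could look like, assuming a <think>...</think> convention; the tag names and wording are illustrative, not the paper's exact prompt.

    # Hypothetical think-style prompt template; wording and tags are
    # illustrative, not LongWriter-Zero's exact prompt.

    THINK_TEMPLATE = (
        "You are writing a long-form piece.\n"
        "First, plan inside <think>...</think>: outline the sections, a "
        "target length for each, and the key points to cover.\n"
        "Then write the full text after the closing </think> tag.\n\n"
        "Task: {instruction}\n"
        "Target length: {target_len} words."
    )

    prompt = THINK_TEMPLATE.format(
        instruction="Write a detailed report on urban flood resilience.",
        target_len=5000,
    )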
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • WritingBench
  • Arena-Write
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Constrained by the base model's capabilities, which limits diversity and innovation in writing styles (from the paper)
  • The maximum-likelihood objective fails to provide explicit signals for global properties such as coherence (from the paper)
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001

The authors did not state explicit future directions.

Author keywords

  • LLMs
  • RL
  • Long-form generation
