ICLR 2026 Orals

RAIN-Merging: A Gradient-Free Method to Enhance Instruction Following in Large Reasoning Models with Preserved Thinking Format

Zhehao Huang, Yuhang Liu, Baijiong Lin, Yixin Lou, Zhengbao He, Hanling Tian, Tao Li, Xiaolin Huang

LLMs & Reasoning · Thu, Apr 23 · 11:30 AM–11:40 AM · Amphitheater · Avg rating: 6.50 (range 4–8)

Abstract

Large reasoning models (LRMs) excel at long chains of reasoning but often fail to faithfully follow instructions regarding output format, constraints, or specific requirements. We investigate whether this gap can be closed by integrating an instruction-tuned model (ITM) into an LRM. Analyzing their differences in parameter space, namely task vectors, we find that their principal subspaces are nearly orthogonal across key modules, suggesting that a lightweight merge can be performed with minimal interference. However, we also demonstrate that naïve merges are fragile because they overlook the output format mismatch between LRMs (with explicit *thinking* and *response* segments) and ITMs (answer-only). We introduce **RAIN-Merging** (Reasoning-Aware Instruction-attention guided Null-space projection Merging), a gradient-free method that integrates instruction following while preserving thinking format and reasoning performance. First, with a small reasoning calibration set, we project the ITM task vector onto the null space of forward features at thinking special tokens, which preserves the LRM's structured reasoning mechanisms. Second, using a small instruction calibration set, we estimate instruction attention to derive module-specific scaling that amplifies instruction-relevant components and suppresses leakage. Across four instruction-following benchmarks and nine reasoning and general-capability benchmarks, RAIN-Merging substantially improves instruction adherence while maintaining reasoning quality. The gains are consistent across model scales and architectures, translating to improved performance in agent settings.
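The null-space projection step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: `null_space_projector` and `rain_style_merge` are hypothetical names, and `F` stands in for the LRM's forward activations collected at thinking special tokens on the reasoning calibration set.

```python
import numpy as np

def null_space_projector(F: np.ndarray) -> np.ndarray:
    """Projector onto the orthogonal complement of the row space of F.

    F: (n_tokens, d_in) forward features collected at thinking special tokens.
    """
    # Thin SVD; rows of Vt with non-negligible singular values span row(F).
    _, S, Vt = np.linalg.svd(F, full_matrices=False)
    rank = int((S > S.max() * 1e-10).sum())
    V = Vt[:rank]                        # (rank, d_in) orthonormal basis of row(F)
    return np.eye(F.shape[1]) - V.T @ V  # I - V^T V

def rain_style_merge(W_lrm: np.ndarray, delta_itm: np.ndarray,
                     F: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Add the ITM task vector after removing the components that would
    perturb this module's outputs on thinking-token features:
    (W + delta @ P) @ x == W @ x for every x in the span of F's rows."""
    P = null_space_projector(F)
    return W_lrm + alpha * (delta_itm @ P)
```

Because `delta_itm @ P` annihilates every calibration feature, the merged module reproduces the LRM's behavior exactly on thinking-token activations while still shifting behavior elsewhere.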

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

Proposes RAIN-Merging to merge instruction-tuned and reasoning models while preserving structured thinking format.

Contributions
  • Gradient-free method projecting the instruction task vector onto the null space of forward features at thinking special tokens
  • Module-specific scaling based on instruction attention to amplify relevant components
  • Preserves structured reasoning output while improving instruction following across benchmarks
Methods used
  • Model merging
  • Null-space projection
  • Task vector analysis
  • Instruction attention
  • KL constraint
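The instruction-attention scaling listed above can likewise be sketched with placeholder statistics. The estimator below is an assumption for illustration, not the paper's exact formula: `instruction_attention_scale` and the mean-mass statistic are hypothetical, and real usage would average over a calibration set of instruction prompts.

```python
import numpy as np

def instruction_attention_scale(attn: np.ndarray, instr_mask: np.ndarray) -> float:
    """Hypothetical per-module statistic: average attention mass the module's
    heads place on instruction tokens for one calibration prompt.

    attn: (heads, query_len, key_len) attention weights.
    instr_mask: (key_len,) boolean mask marking instruction tokens.
    """
    return float(attn[..., instr_mask].sum(axis=-1).mean())

def scaled_merge(base: dict, task_vec: dict, scales: dict) -> dict:
    """Add each module's task vector with its module-specific scale,
    amplifying instruction-relevant modules and damping the rest."""
    return {name: base[name] + scales[name] * task_vec[name] for name in base}
```

A higher attention mass on instruction tokens yields a larger scale for that module, which matches the abstract's goal of amplifying instruction-relevant components while suppressing leakage.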
Limitations (author-stated)
  • Method relies on R1-style templates and tokenization to extract thinking segments; it fails if the model hides its reasoning or adopts a different template
  • Instruction and reasoning calibration sets are small and carry noise from LLM-as-judge auto-annotation; distribution shifts across languages or task domains may hurt generalization
  • The KL constraint on the thinking segment helps preserve reasoning format, but non-thinking content and safety-relevant behaviors may drift, with no formal safety guarantee
  • Experiments focus primarily on the Qwen/DeepSeek families; applicability to multimodal LLMs, tool use, code generation, and multilingual scenarios requires systematic evaluation
Future work (author-stated)

Authors did not state explicit future directions.

Author keywords

  • Large Reasoning Model
  • Instruction Following
  • Model Merging
  • Null-Space
