ICLR 2026 Orals

ThinKV: Thought-Adaptive KV Cache Compression for Efficient Reasoning Models

Akshat Ramachandran, Marina Neseem, Charbel Sakr, Rangharajan Venkatesan, Brucek Khailany, Tushar Krishna

LLMs & Reasoning · Fri, Apr 24 · 3:15 PM–3:25 PM · Amphitheater · Avg rating: 6.00 (range 4–8)

Abstract

The long output generation of large reasoning models enables extended chain of thought (CoT) but also drives rapid growth of the key–value (KV) cache, quickly overwhelming GPU memory. To address this challenge, we propose ThinKV, a thought-adaptive KV cache compression framework. ThinKV is based on the observation that attention sparsity reveals distinct thought types of varying importance within the CoT. It applies a hybrid quantization–eviction strategy, assigning token precision by thought importance and progressively evicting tokens from less critical thoughts as reasoning trajectories evolve. Furthermore, to implement ThinKV, we design a kernel that extends PagedAttention to enable efficient reuse of evicted tokens' memory slots, eliminating compaction overheads. Extensive experiments on DeepSeek-R1-Distill, GPT-OSS, and NVIDIA AceReason across mathematics and coding benchmarks show that ThinKV achieves near-lossless accuracy with less than 5% of the original KV cache, while delivering up to 5.8x higher inference throughput than state-of-the-art (SoTA) baselines.
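
The hybrid quantization–eviction strategy described above can be pictured with a short sketch: group KV-cache tokens by thought, map each thought's importance (derived in the paper from attention sparsity) to a bit width, and evict thoughts whose importance falls below a floor as the trajectory evolves. The `Thought` class, precision thresholds, and eviction cutoff below are illustrative assumptions, not ThinKV's actual values.

```python
from dataclasses import dataclass

@dataclass
class Thought:
    tokens: list         # KV-cache token indices belonging to this thought
    importance: float    # hypothetical score in [0, 1] derived from attention sparsity
    precision: int = 16  # bit width assigned to this thought's KV entries

def assign_precision(thought: Thought) -> int:
    """Map thought importance to a KV bit width (illustrative thresholds)."""
    if thought.importance > 0.7:
        return 8         # important thoughts keep higher precision
    if thought.importance > 0.3:
        return 4
    return 2             # less critical thoughts are quantized aggressively

def compress_step(thoughts, evict_below=0.1):
    """Assign precisions and progressively evict the least important thoughts."""
    kept = []
    for t in thoughts:
        if t.importance < evict_below:
            continue     # evicted: this thought's cache slots can be reused
        t.precision = assign_precision(t)
        kept.append(t)
    return kept

trajectory = [
    Thought(tokens=[0, 1, 2], importance=0.9),
    Thought(tokens=[3, 4], importance=0.4),
    Thought(tokens=[5, 6, 7], importance=0.05),
]
for t in compress_step(trajectory):
    print(t.tokens, f"{t.precision}-bit")  # higher-importance thoughts keep more bits
```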

One-sentence summary (auto-generated by claude-haiku-4-5-20251001)

Compresses the KV cache in reasoning models via thought-adaptive quantization and eviction, achieving near-lossless accuracy.

Contributions (auto-generated by claude-haiku-4-5-20251001)
  • Proposes thought-adaptive strategy leveraging attention sparsity to identify distinct thought types
  • Combines hybrid quantization-eviction assigning token precision by thought importance
  • Designs kernel extending PagedAttention for efficient memory reuse of evicted tokens (see the sketch after this list)
  • Achieves near-lossless accuracy with <5% of the original KV cache and up to 5.8x throughput gains
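
The third bullet's slot-reuse idea can be sketched as follows: in a paged KV cache, evicting a token simply returns its physical slot to a free list, and a later token takes over that slot without compacting the surviving entries. The `PagedKVCache` class is a hypothetical, minimal illustration of the concept, not the paper's kernel.

```python
class PagedKVCache:
    """Toy paged KV cache: evicted tokens' slots are reused in place."""

    def __init__(self, num_slots: int):
        self.free_slots = list(range(num_slots))  # all physical slots start free
        self.token_to_slot = {}                   # token id -> physical slot

    def append(self, token_id: int) -> int:
        """Place a new token's KV entry into any free slot (no compaction)."""
        slot = self.free_slots.pop()
        self.token_to_slot[token_id] = slot
        return slot

    def evict(self, token_id: int) -> None:
        """Evicting a token only returns its slot to the free list."""
        self.free_slots.append(self.token_to_slot.pop(token_id))

cache = PagedKVCache(num_slots=4)
for tok in (0, 1, 2):
    cache.append(tok)
cache.evict(1)           # token from a low-importance thought is dropped
print(cache.append(99))  # the new token reuses the evicted token's slot
```
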
Methods used (auto-generated by claude-haiku-4-5-20251001)
  • KV cache compression
  • Quantization
  • Token eviction
Datasets used (auto-generated by claude-haiku-4-5-20251001)
  • AIME
  • LiveCodeBench
Limitations (author-stated; auto-generated by claude-haiku-4-5-20251001)
  • Not directly applicable to settings dominated by long input contexts
  • Future LRMs with greater emphasis on long-input contexts may require additional exploration
Future work (author-stated; auto-generated by claude-haiku-4-5-20251001)

Authors did not state explicit future directions.

Author keywords

  • Large Reasoning Models
  • KV Cache Compression
  • Quantization
  • Eviction
  • Sparsity
  • Thought-Aware Compression
