ICLR 2026 Orals

Token-Importance Guided Direct Preference Optimization

Ning Yang, Hai Lin, Yibo Liu, Baoliang Tian, Guoqing Liu, Haijun Zhang

LLMs & Reasoning Thu, Apr 23 · 3:39 PM–3:49 PM · Amphitheater Avg rating: 6.50 (4–8)
Author-provided TL;DR

We propose Token-Importance Guided Direct Preference Optimization (TI-DPO) to better align LLMs with human preferences by using a hybrid weighting mechanism to identify key tokens and a triplet loss to guide the optimization process.

Abstract

Aligning Large Language Models (LLMs) with human preferences is crucial for safe and effective AI interactions. While popular methods like Direct Preference Optimization (DPO) have simplified alignment, they remain sensitive to data noise and overlook the differential importance of individual tokens. Existing token-level approaches often rely on probability prediction or simplistic weighting schemes to obtain token importance, which still cannot fully address these issues. To solve this problem, we propose Token-Importance Guided Direct Preference Optimization (TI-DPO), a framework that achieves fine-grained semantic control through two synergistic innovations. First, we propose a novel hybrid weighting mechanism that combines gradient attribution with a Gaussian prior, ensuring both the accuracy and robustness of token importance scores. Second, we employ a triplet loss to provide structured guidance for the optimization, explicitly guiding model outputs to approach preferred responses and diverge from non-preferred ones. Experimental results show that TI-DPO achieves higher accuracy and stronger generative diversity, providing a more stable and computationally efficient solution compared with DPO and other RLHF methods.
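To make the two ingredients concrete, here is a minimal sketch of what a hybrid token-weighting step and a token-weighted, triplet-style preference loss could look like. This is an illustration assuming NumPy and pre-computed per-token gradient-attribution magnitudes and log-probabilities; the function names, the blending coefficient `alpha`, the centered Gaussian prior, and the margin term are all assumptions, not the paper's exact formulation.

```python
import numpy as np

def token_importance(grad_attr, sigma=2.0, alpha=0.5):
    """Hybrid token-importance weights (illustrative, not the paper's formula).

    grad_attr: per-token gradient-attribution magnitudes, shape (T,).
    Blends normalized attribution with a Gaussian positional prior,
    then rescales so the mean weight is 1.
    """
    T = len(grad_attr)
    attr = np.abs(grad_attr)
    attr = attr / (attr.sum() + 1e-8)                 # normalized attribution
    pos = np.arange(T)
    prior = np.exp(-((pos - T / 2) ** 2) / (2 * sigma**2))
    prior = prior / prior.sum()                       # Gaussian prior (assumed centered)
    w = alpha * attr + (1 - alpha) * prior            # hybrid weighting
    return T * w / w.sum()                            # mean weight = 1

def ti_dpo_triplet_loss(logp_chosen, logp_rejected, logp_ref_c, logp_ref_r,
                        w_c, w_r, beta=0.1, margin=0.0):
    """Token-weighted DPO-style objective with a triplet-style margin (a sketch).

    logp_*: per-token log-probs under the policy / reference model.
    w_c, w_r: token-importance weights for the chosen / rejected response.
    """
    # Importance-weighted log-ratios for each response
    r_c = np.sum(w_c * (logp_chosen - logp_ref_c))
    r_r = np.sum(w_r * (logp_rejected - logp_ref_r))
    # Logistic loss pushing the chosen response above the rejected one by a margin
    z = beta * (r_c - r_r) - margin
    return np.log1p(np.exp(-z))                       # -log sigmoid(z)
```

In this sketch the Gaussian prior acts as a smoothing term that keeps the weights robust when attribution scores are noisy, matching the robustness motivation stated in the abstract; the actual attribution and loss definitions are in the paper.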

One-sentence summary·Auto-generated by claude-haiku-4-5-20251001

Proposes token-importance guided DPO with gradient attribution weighting and triplet loss for fine-grained LLM alignment.

Contributions·Auto-generated by claude-haiku-4-5-20251001
  • Proposes hybrid weighting mechanism combining gradient attribution with Gaussian prior for token importance
  • Employs triplet loss for structured guidance to approach preferred and diverge from non-preferred responses
  • Achieves higher accuracy and stronger generative diversity compared to DPO and RLHF methods
  • Provides computational efficiency and stability improvements
Methods used·Auto-generated by claude-haiku-4-5-20251001
  • Direct Preference Optimization
  • Gradient attribution
  • Triplet loss
Datasets used·Auto-generated by claude-haiku-4-5-20251001
  • IFEval
  • TruthfulQA
  • HumanEval
  • MMLU
  • GPQA
  • GSM8K
Limitations (author-stated)·Auto-generated by claude-haiku-4-5-20251001
  • TI-DPO introduces computational overhead, requiring approximately 2× the training time of standard DPO
  • Performance gap compared to sequence-level baselines on knowledge-intensive and mathematical reasoning benchmarks
  • Inherent risk of learning stereotypes or biases from preference data, though token weighting provides interpretability
Future work (author-stated)·Auto-generated by claude-haiku-4-5-20251001
  • Integrate token-importance mechanism with group-based optimization methods like GRPO
  • Enhance reasoning capabilities on knowledge-intensive and mathematical tasks

Author keywords

  • LLMs
  • RLHF
  • DPO
  • Human Preference Alignment
  • Token-Importance
  • Triplet Loss
