Token-Importance Guided Direct Preference Optimization
Ning Yang, Hai Lin, Yibo Liu, Baoliang Tian, Guoqing Liu, Haijun Zhang
We propose Token-Importance Guided Direct Preference Optimization (TI-DPO) to better align LLMs with human preferences, using a hybrid weighting mechanism to identify key tokens and a triplet loss to guide the optimization process.
Abstract
Aligning Large Language Models (LLMs) with human preferences is crucial for safe and effective AI interactions. While popular methods like Direct Preference Optimization (DPO) have simplified alignment, they remain sensitive to data noise and overlook the differential importance of individual tokens. Existing token-level approaches often rely on probability prediction or simplistic weighting schemes to obtain token importance, which still cannot fully address these issues. To solve this problem, we propose Token-Importance Guided Direct Preference Optimization (TI-DPO), a framework that achieves fine-grained semantic control through two synergistic innovations. First, we propose a novel hybrid weighting mechanism that combines gradient attribution with a Gaussian prior, ensuring both the accuracy and robustness of token importance scores. Second, we employ a triplet loss to provide structured guidance for the optimization, explicitly guiding model outputs to approach preferred responses and diverge from non-preferred ones. Experimental results show that TI-DPO achieves higher accuracy and stronger generative diversity, providing a more stable and computationally efficient solution compared with DPO and other RLHF methods.
Proposes token-importance guided DPO with gradient attribution weighting and triplet loss for fine-grained LLM alignment.
- Proposes hybrid weighting mechanism combining gradient attribution with Gaussian prior for token importance
- Employs triplet loss for structured guidance to approach preferred and diverge from non-preferred responses
- Achieves higher accuracy and stronger generative diversity compared to DPO and RLHF methods
- Provides computational efficiency and stability improvements
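The two components above can be sketched in a minimal, self-contained form. This is an illustrative assumption of how a hybrid weight (gradient-attribution magnitude blended with a Gaussian positional prior, softmax-normalized) and a triplet loss might look; function names, the blending coefficient `alpha`, and the prior's placement are hypothetical, not the paper's implementation:

```python
import math

def hybrid_token_weights(grad_norms, alpha=0.7, sigma_frac=0.25):
    """Blend per-token gradient-attribution magnitudes with a Gaussian prior
    (here assumed centered mid-sequence), then softmax-normalize.
    alpha and sigma_frac are illustrative hyperparameters."""
    n = len(grad_norms)
    mu, sigma = (n - 1) / 2.0, max(1.0, sigma_frac * n)
    prior = [math.exp(-((i - mu) ** 2) / (2 * sigma ** 2)) for i in range(n)]
    scores = [alpha * g + (1 - alpha) * p for g, p in zip(grad_norms, prior)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # stable softmax
    z = sum(exps)
    return [e / z for e in exps]

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on sequence representations: pull the model
    output toward the preferred response, push it from the dispreferred one."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
```

Tokens with large attribution gradients receive the highest weight, while the Gaussian prior keeps the weighting robust when attributions are noisy; the triplet loss then supplies the structured guidance described in the abstract.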
- Direct Preference Optimization
- Gradient attribution
- Triplet loss
- IFEval
- TruthfulQA
- HumanEval
- MMLU
- GPQA
- GSM8K
- TI-DPO introduces computational overhead, requiring approximately 2x the training time of standard DPO
- Performance gap compared to sequence-level baselines on knowledge-intensive and mathematical reasoning benchmarks
- Inherent risk of learning stereotypes or biases from preference data, though token weighting provides interpretability
- Integrate the token-importance mechanism with group-based optimization methods like GRPO
- Enhance reasoning capabilities on knowledge-intensive and mathematical tasks
Author keywords
- LLMs
- RLHF
- DPO
- Human Preference Alignment
- Token-Importance
- Triplet Loss
Related orals
Benchmarking Empirical Privacy Protection for Adaptations of Large Language Models
Benchmarks practical privacy risks in differential-privacy-adapted LLMs, revealing that distribution shifts and model choice affect protection effectiveness.
Half-order Fine-Tuning for Diffusion Model: A Recursive Likelihood Ratio Optimizer
Proposes Recursive Likelihood Ratio optimizer for efficient fine-tuning of diffusion models with lower variance gradient estimation.
Invisible Safety Threat: Malicious Finetuning for LLM via Steganography
Demonstrates LLMs can be finetuned to generate harmful steganographically-hidden outputs while appearing benign to safety systems.
Reducing Belief Deviation in Reinforcement Learning for Active Reasoning of LLM Agents
Proposes T3 algorithm to detect belief deviation in LLM agents and truncate trajectories for improved reinforcement learning in active reasoning tasks.
RefineStat: Efficient Exploration for Probabilistic Program Synthesis
RefineStat enforces semantic constraints and applies diagnostic-aware refinement for synthesizing valid probabilistic programs from smaller language models.