Why DPO is a Misspecified Estimator and How to Fix It
Aditya Gopalan, Sayak Ray Chowdhury, Debangshu Banerjee
DPO is not sound by design and can fail under misspecification; we fix it with a careful analysis.
Abstract
Direct alignment algorithms such as Direct Preference Optimization (DPO) fine-tune models based on preference data, using only supervised learning instead of two-stage reinforcement learning with human feedback (RLHF). We show that DPO encodes a statistical estimation problem over reward functions induced by a parametric policy class. When the true reward function that generates preferences cannot be realized via the policy class, DPO becomes misspecified, resulting in failure modes such as preference order reversal, worsening of policy reward, and high sensitivity to the input preference data distribution. On the other hand, we study the local behavior of two-stage RLHF for a parametric class and relate it to a natural gradient step in policy space. Our fine-grained geometric characterization allows us to propose AuxDPO, which introduces additional auxiliary variables in the DPO loss function to help move towards the RLHF solution in a principled manner and mitigate the misspecification in DPO. We empirically demonstrate the superior performance of AuxDPO on didactic bandit settings as well as LLM alignment tasks.
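To make the estimation problem concrete, the standard DPO objective scores each preference pair by the beta-scaled log-ratio between the policy and the reference model (the "implicit reward") and maximizes the Bradley-Terry likelihood of the observed preference. A minimal sketch, using plain Python rather than a tensor library (the function and variable names are illustrative, not from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-pair DPO loss for one (chosen, rejected) response pair.

    The implicit reward is r(x, y) = beta * log(pi(y|x) / pi_ref(y|x));
    the loss is the negative Bradley-Terry log-likelihood that the
    chosen response beats the rejected one.
    """
    reward_w = beta * (pi_logp_w - ref_logp_w)  # implicit reward, chosen
    reward_l = beta * (pi_logp_l - ref_logp_l)  # implicit reward, rejected
    return -math.log(sigmoid(reward_w - reward_l))
```

Misspecification, as the abstract describes it, arises when no policy in the parametric class induces implicit rewards matching the true preference-generating reward, so minimizing this loss can move the policy in undesirable directions.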
AuxDPO introduces auxiliary variables that mitigate DPO's misspecification and move the solution toward the RLHF solution.
- Characterization of DPO as a statistical estimation problem over reward functions
- Identification of failure modes when the true reward function cannot be realized by the policy class
- Geometric characterization of two-stage RLHF, relating it to a natural gradient step in policy space
- AuxDPO, which uses auxiliary variables to achieve superior performance over DPO variants
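The summary says only that AuxDPO adds auxiliary variables to the DPO loss to absorb the part of the true reward the policy class cannot represent; the exact form is not given here. One purely illustrative sketch, in which each preference pair gets a learned offset `aux[i]` on its logit margin with a quadratic regularizer (all names and the regularizer choice are assumptions, not the paper's formulation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def auxdpo_loss(margins, aux, lam=1.0):
    """Illustrative AuxDPO-style loss (NOT the paper's exact objective).

    margins[i]: beta-scaled implicit-reward margin of pair i under the
                policy, i.e. the usual DPO logit for that pair.
    aux[i]:     learned auxiliary offset absorbing reward components the
                policy class cannot realize.
    lam:        regularization strength keeping the offsets small, so
                they explain only the residual misspecification.
    """
    n = len(margins)
    nll = -sum(math.log(sigmoid(m + a)) for m, a in zip(margins, aux)) / n
    reg = lam * sum(a * a for a in aux) / n
    return nll + reg
```

With `aux` fixed at zero this reduces to the average DPO loss, which is the sanity check one would want from any such augmentation.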
- Direct preference optimization
- RLHF
- Auxiliary variable learning
- Reward function estimation
- MMLU-Pro
- REWARDBENCH V2
- ULTRAFEEDBACK
Authors did not state explicit limitations.
Authors did not state explicit future directions.
Author keywords
- Direct Preference Optimization
- Reinforcement Learning
- Reinforcement learning with human feedback
Related orals
Benchmarking Empirical Privacy Protection for Adaptations of Large Language Models
Benchmarks practical privacy risks in differential privacy-adapted LLMs, revealing that distribution shifts and model choice impact effectiveness.
Half-order Fine-Tuning for Diffusion Model: A Recursive Likelihood Ratio Optimizer
Proposes a Recursive Likelihood Ratio optimizer for efficient fine-tuning of diffusion models with lower-variance gradient estimation.
Invisible Safety Threat: Malicious Finetuning for LLM via Steganography
Demonstrates LLMs can be finetuned to generate harmful steganographically-hidden outputs while appearing benign to safety systems.
Reducing Belief Deviation in Reinforcement Learning for Active Reasoning of LLM Agents
Proposes the T3 algorithm, which detects belief deviation in LLM agents and truncates trajectories for improved reinforcement learning in active reasoning tasks.
RefineStat: Efficient Exploration for Probabilistic Program Synthesis
RefineStat enforces semantic constraints and applies diagnostic-aware refinement for synthesizing valid probabilistic programs from smaller language models.