What's In My Human Feedback? Learning Interpretable Descriptions of Preference Data
Rajiv Movva, Smitha Milli, Sewon Min, Emma Pierson
We present WIMHF, a method for describing the preferences encoded by human feedback; we apply it to produce insights from seven widely used datasets and show that it enables new approaches to data curation and personalization.
Abstract
Human feedback can alter language models in unpredictable and undesirable ways, as practitioners lack a clear understanding of what feedback data encodes. While prior work studies preferences over certain attributes (e.g., length or sycophancy), automatically extracting relevant features without pre-specifying hypotheses remains challenging. We introduce *What's In My Human Feedback?* (WIMHF), a method to explain feedback data using sparse autoencoders. WIMHF characterizes both (1) the preferences a dataset is capable of measuring and (2) the preferences that the annotators actually express. Across 7 datasets, WIMHF identifies a small number of human-interpretable features that account for the majority of the preference prediction signal achieved by black-box models. These features reveal a wide diversity in what humans prefer, and the role of dataset-level context: for example, users on Reddit prefer informality and jokes, while annotators in HH-RLHF and PRISM disprefer them. WIMHF also surfaces potentially unsafe preferences, such as that LMArena users tend to vote against refusals, often in favor of toxic content. The learned features enable effective *data curation*: re-labeling the harmful examples in Arena yields large safety gains (+37%) with no cost to general performance. They also allow fine-grained *personalization*: on the Community Alignment dataset, we learn annotator-specific weights over subjective features that improve preference prediction. WIMHF provides a human-centered analysis method for practitioners to better understand and use preference data.
WIMHF uses sparse autoencoders to extract human-interpretable features from preference data, enabling better understanding and curation of human feedback.
- WIMHF method to explain feedback data using sparse autoencoders
- Identifies measurable preferences and preferences actually expressed by annotators
- Enables effective data curation and fine-grained personalization of alignment
- Sparse autoencoders
- Preference learning
- Data curation
- Interpretability
- HH-RLHF
- PRISM
- Community Alignment
- LMArena
Limitations
- Using prompt-response embeddings does not consistently improve preference prediction over response-only embeddings
- Gap remains between interpretable features and fine-tuned reward models
- Non-English data was removed, limiting multilingual generalization
- Personalization experiments are limited to a single dataset with sparse annotator-level data
Future directions
- Develop a conditional SAE or learn prompt-dependent feature weights for better prompt conditioning
- Close the gap between interpretable features and reward models while preserving interpretability
- Extend to multilingual feedback data with language- and culture-specific preferences
- Evaluate personalization on richer datasets with more annotations per user
Author keywords
- rlhf
- explaining datasets
- interpretability
- reward modeling
- personalization
Related orals
LLM Fingerprinting via Semantically Conditioned Watermarks
Introduces semantically conditioned watermarks for robust and stealthy LLM fingerprinting across deployment scenarios.
Steering the Herd: A Framework for LLM-based Control of Social Learning
Framework studying strategic control of social learning by algorithmic information mediators with theoretical analysis and LLM-based simulations.
Every Language Model Has a Forgery-Resistant Signature
Ellipse signatures function as forgery-resistant model output identifiers based on high-dimensional geometric constraints.
Gaussian certified unlearning in high dimensions: A hypothesis testing approach
Analyzes machine unlearning in high dimensions, showing that a single noisy Newton step with Gaussian noise suffices for a favorable privacy-accuracy tradeoff.
Differentially Private Domain Discovery
WGM-based methods provide efficient domain discovery with near-optimal guarantees for missing mass on Zipfian data.