ICLR 2026 Orals

What's In My Human Feedback? Learning Interpretable Descriptions of Preference Data

Rajiv Movva, Smitha Milli, Sewon Min, Emma Pierson

Safety, Privacy & Alignment · Thu, Apr 23 · 4:15 PM–4:25 PM · 203 A/B · Avg rating: 6.50 (4–8)
Author-provided TL;DR

We present WIMHF, a method to describe the preferences encoded by human feedback; produce insights from seven widely-used datasets; and show that the method enables new approaches to data curation and personalization.

Abstract

Human feedback can alter language models in unpredictable and undesirable ways, as practitioners lack a clear understanding of what feedback data encodes. While prior work studies preferences over certain attributes (e.g., length or sycophancy), automatically extracting relevant features without pre-specifying hypotheses remains challenging. We introduce *What's In My Human Feedback?* (WIMHF), a method to explain feedback data using sparse autoencoders. WIMHF characterizes both (1) the preferences a dataset is capable of measuring and (2) the preferences that the annotators actually express. Across 7 datasets, WIMHF identifies a small number of human-interpretable features that account for the majority of the preference prediction signal achieved by black-box models. These features reveal a wide diversity in what humans prefer, and the role of dataset-level context: for example, users on Reddit prefer informality and jokes, while annotators in HH-RLHF and PRISM disprefer them. WIMHF also surfaces potentially unsafe preferences, such as that LMArena users tend to vote against refusals, often in favor of toxic content. The learned features enable effective *data curation*: re-labeling the harmful examples in Arena yields large safety gains (+37%) with no cost to general performance. They also allow fine-grained *personalization*: on the Community Alignment dataset, we learn annotator-specific weights over subjective features that improve preference prediction. WIMHF provides a human-centered analysis method for practitioners to better understand and use preference data.
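To make the recipe described in the abstract concrete, here is a minimal sketch of one way the pipeline could look: embed each response, learn a sparse autoencoder over the embeddings, and fit a linear (Bradley-Terry-style) preference head on the difference in feature activations between the chosen and rejected response. The architecture, hidden sizes, loss coefficients, and the `SparseAutoencoder`/`PreferenceHead` classes below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: an SAE over response embeddings plus a linear preference head
# over feature differences. All hyperparameters and class names are assumptions.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_embed: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_embed, d_features)
        self.decoder = nn.Linear(d_features, d_embed)

    def forward(self, x):
        z = torch.relu(self.encoder(x))   # sparse, non-negative feature activations
        x_hat = self.decoder(z)           # reconstruction of the input embedding
        return z, x_hat


def sae_loss(x, x_hat, z, l1_coef=1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparse features.
    return ((x - x_hat) ** 2).mean() + l1_coef * z.abs().mean()


class PreferenceHead(nn.Module):
    # Predicts P(chosen preferred over rejected) from the difference in
    # feature activations, so each learned weight is attached to one
    # interpretable feature.
    def __init__(self, d_features: int):
        super().__init__()
        self.w = nn.Linear(d_features, 1, bias=False)

    def forward(self, z_chosen, z_rejected):
        return torch.sigmoid(self.w(z_chosen - z_rejected)).squeeze(-1)


if __name__ == "__main__":
    # Random stand-in embeddings; in practice these would come from a text encoder.
    d_embed, d_features, n_pairs = 768, 256, 32
    sae, head = SparseAutoencoder(d_embed, d_features), PreferenceHead(d_features)

    emb_chosen = torch.randn(n_pairs, d_embed)
    emb_rejected = torch.randn(n_pairs, d_embed)

    z_c, xhat_c = sae(emb_chosen)
    z_r, xhat_r = sae(emb_rejected)
    recon = sae_loss(emb_chosen, xhat_c, z_c) + sae_loss(emb_rejected, xhat_r, z_r)

    pref = head(z_c, z_r)                 # predicted probability chosen wins
    labels = torch.ones(n_pairs)          # chosen response is preferred by construction
    bce = nn.functional.binary_cross_entropy(pref, labels)
    (recon + bce).backward()
```

Under these assumptions, per-annotator personalization (as in the Community Alignment experiments) would amount to learning a separate weight vector in the preference head for each annotator over the subjective features.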

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

WIMHF uses sparse autoencoders to extract human-interpretable features from preference data, enabling better understanding and curation of human feedback.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • WIMHF method to explain feedback data using sparse autoencoders
  • Identifies measurable preferences and preferences actually expressed by annotators
  • Enables effective data curation and fine-grained personalization of alignment
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • Sparse autoencoders
  • Preference learning
  • Data curation
  • Interpretability
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • HH-RLHF
  • PRISM
  • Community Alignment
  • LMArena
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Using prompt-response embeddings does not consistently improve preference prediction over response-only embeddings
  • Gap between interpretable features and finetuned reward models
  • Removed non-English data, limiting multilingual generalization
  • Personalization experiments limited to a single dataset with sparse annotator-level data
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Develop a conditional SAE or learn prompt-dependent feature weights for better prompt conditioning
  • Close the gap between interpretable features and reward models while preserving interpretability
  • Extend to multilingual feedback data with language- and culture-specific preferences
  • Evaluate personalization on richer datasets with more annotations per user

Author keywords

  • rlhf
  • explaining datasets
  • interpretability
  • reward modeling
  • personalization
