ICLR 2026 Orals

Addressing divergent representations from causal interventions on neural networks

Satchel Grant, Simon Jerome Han, Alexa R. Tartaglini, Christopher Potts

Interpretability & Mechanistic Understanding · Sat, Apr 25 · 11:30 AM–11:40 AM · 203 A/B · Avg rating: 5.20 (4–8)
Author-provided TL;DR

We show empirical representational divergence between native and causally intervened latent states, demonstrate that this divergence can be pernicious, and propose a solution.

Abstract

A common approach to mechanistic interpretability is to causally manipulate model representations via targeted interventions in order to understand what those representations encode. Here we ask whether such interventions create out-of-distribution (divergent) representations, and whether this raises concerns about how faithful the resulting explanations are to the target model in its natural state. First, we demonstrate theoretically and empirically that common causal intervention techniques often do shift internal representations away from the natural distribution of the target model. Then, we provide a theoretical analysis of two cases of such divergence: "harmless" divergences that occur in the behavioral null-space of the layer(s) of interest, and "pernicious" divergences that activate hidden network pathways and cause dormant behavioral changes. Finally, in an effort to mitigate the pernicious cases, we apply and modify the Counterfactual Latent (CL) loss from Grant (2025), which allows representations from causal interventions to remain closer to the natural distribution, reducing the likelihood of harmful divergences while preserving the interpretive power of the interventions. Together, these results highlight a path towards more reliable interpretability methods.
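To make the setup concrete, below is a minimal sketch (not the authors' code) of an activation-patching intervention on a toy two-layer MLP. It measures how far the patched hidden state drifts from the model's natural hidden distribution, and splits the intervention's displacement into a component in the row space of the downstream weights (which can change behavior) and a component in their null space (behaviorally inert at this layer, mirroring the paper's "harmless" case). All names and dimensions here are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): activation patching
# on a toy two-layer MLP, plus two diagnostics of the resulting divergence.
import torch

torch.manual_seed(0)
d_in, d_hid, d_out = 8, 16, 4
W_in = torch.randn(d_hid, d_in) / d_in**0.5
W_out = torch.randn(d_out, d_hid) / d_hid**0.5

def hidden(x):               # first-layer representation
    return torch.relu(W_in @ x)

def output(h):               # downstream computation
    return W_out @ h

base, source = torch.randn(d_in), torch.randn(d_in)
h_base, h_src = hidden(base), hidden(source)

# Activation patching: overwrite a subspace of the base hidden state with
# the source run's values (here, simply the first k coordinates).
k = 4
h_patched = h_base.clone()
h_patched[:k] = h_src[:k]

# Diagnostic 1: Mahalanobis-style distance of the patched state from the
# natural hidden distribution, estimated from random natural inputs.
H = torch.relu(W_in @ torch.randn(d_in, 1024))      # natural samples, (d_hid, N)
mu, cov = H.mean(dim=1), torch.cov(H)
delta = h_patched - mu
m_dist = torch.sqrt(delta @ torch.linalg.solve(cov + 1e-4 * torch.eye(d_hid), delta))

# Diagnostic 2: split the intervention's displacement into a row-space
# component of W_out (can change behavior) and a null-space component
# (W_out @ null_part == 0 up to numerics, i.e. "harmless" at this layer).
disp = h_patched - h_base
P_row = torch.linalg.pinv(W_out) @ W_out            # projector onto row space
row_part, null_part = P_row @ disp, disp - P_row @ disp

print(f"Mahalanobis distance of patched state: {m_dist.item():.2f}")
print(f"|row-space part| = {row_part.norm().item():.3f}, "
      f"|null-space part| = {null_part.norm().item():.3f}")
print(f"effect on output: {(output(h_patched) - output(h_base)).norm().item():.3f}")
```

In this toy the downstream map is linear, so the null-space part of the displacement provably leaves the output unchanged; in real networks the downstream computation is nonlinear, which is where dormant pathways can be activated and divergence becomes pernicious.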

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

A study of causal interventions showing that they produce out-of-distribution representations, and proposing a Counterfactual Latent loss to mitigate harmful divergences.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • Theoretical and empirical demonstration that common causal interventions shift representations away from the natural distribution
  • Classification of divergences into harmless null-space effects and pernicious pathway-activation cases
  • Modified Counterfactual Latent loss that keeps intervened representations closer to the natural distribution (a hedged sketch follows this list)
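The modified Counterfactual Latent loss mentioned above can be pictured with a hedged sketch: a regularizer that pulls intervened latents toward nearby natural latents. This is an assumed form for illustration only; the actual CL loss in Grant (2025), and the paper's modification of it, may differ.

```python
# Hedged sketch of a counterfactual-latent-style regularizer (assumed form;
# the paper's actual CL loss may differ): penalize the distance of each
# intervened hidden state from its nearest natural hidden state.
import torch

def cl_style_loss(h_intervened: torch.Tensor, h_natural: torch.Tensor) -> torch.Tensor:
    """h_intervened: (B, d) latents after a causal intervention.
    h_natural: (N, d) latents sampled from the model's natural runs.
    Returns the mean squared distance to each latent's nearest natural neighbor."""
    d2 = torch.cdist(h_intervened, h_natural) ** 2   # (B, N) pairwise squared distances
    return d2.min(dim=1).values.mean()

# Illustrative use: add the term, with some weight, to whatever objective
# trains the intervention (e.g., a DAS rotation), so that optimizing the
# interpretive objective does not push latents off-distribution.
# loss = task_loss + cl_weight * cl_style_loss(h_intervened, h_natural)
```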
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • causal interventions
  • mechanistic interpretability
  • representation divergence analysis
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • No principled method for classifying harmful divergence for any causal claim
  • Modified CL loss confined to a narrow set of simplistic settings, and not specific to pernicious divergence
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Explore ways to classify and mitigate pernicious divergence through self-supervised means

Author keywords

  • activation patching
  • mech interp
  • DAS
  • representational divergence
  • faithfulness
