Addressing divergent representations from causal interventions on neural networks
Satchel Grant, Simon Jerome Han, Alexa R. Tartaglini, Christopher Potts
We show empirically that causal interventions produce latent states that diverge from the model's native representations, demonstrate that this divergence can be pernicious, and propose a solution.
Abstract
A common approach to mechanistic interpretability is to causally manipulate model representations via targeted interventions in order to understand what those representations encode. Here we ask whether such interventions create out-of-distribution (divergent) representations, and whether this raises concerns about how faithful the resulting explanations are to the target model in its natural state. First, we demonstrate theoretically and empirically that common causal intervention techniques often do shift internal representations away from the natural distribution of the target model. Then, we provide a theoretical analysis of two cases of such divergences: "harmless" divergences that occur in the behavioral null-space of the layer(s) of interest, and "pernicious" divergences that activate hidden network pathways and cause dormant behavioral changes. Finally, in an effort to mitigate the pernicious cases, we apply and modify the Counterfactual Latent (CL) loss from Grant (2025), which allows representations from causal interventions to remain closer to the natural distribution, reducing the likelihood of harmful divergences while preserving the interpretive power of the interventions. Together, these results highlight a path towards more reliable interpretability methods.
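As a rough illustration of the kind of intervention and divergence the abstract describes, here is a minimal, self-contained Python/PyTorch sketch. The toy MLP, the interchange-style patch, and the choice of Mahalanobis distance to an empirical hidden-state distribution are all assumptions made for illustration and do not come from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a trained network: input -> hidden -> output.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

def run_with_patch(x, patch=None):
    """Run the model, optionally overwriting the hidden latent with `patch`."""
    h = model[1](model[0](x))      # natural hidden state
    if patch is not None:
        h = patch                  # causal intervention: replace the latent
    return h, model[2](h)

# Estimate the "native" hidden-state distribution from ordinary forward passes.
with torch.no_grad():
    H = torch.stack([run_with_patch(torch.randn(8))[0] for _ in range(2000)])
    mu = H.mean(0)
    cov = torch.cov(H.T) + 1e-4 * torch.eye(H.shape[1])  # regularize for invertibility
    prec = torch.linalg.inv(cov)

def mahalanobis(h):
    """Distance of a hidden state from the empirical native distribution."""
    d = h - mu
    return torch.sqrt(d @ prec @ d)

with torch.no_grad():
    base, source = torch.randn(8), torch.randn(8)
    h_base, _ = run_with_patch(base)
    h_source, _ = run_with_patch(source)
    # Interchange-style patch: copy part of the source latent into the base latent.
    h_patched = h_base.clone()
    h_patched[:8] = h_source[:8]
    _, patched_output = run_with_patch(base, patch=h_patched)
    print("natural latent distance:", mahalanobis(h_base).item())
    print("patched latent distance:", mahalanobis(h_patched).item())
```

A larger distance for the patched latent than for natural latents would correspond, in this toy setting, to the out-of-distribution shift the paper studies.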
A study of causal interventions showing that they produce out-of-distribution representations, proposing a Counterfactual Latent loss to mitigate harmful divergences.
- Theoretical and empirical demonstration that common causal interventions shift representations away from the target model's natural distribution
- Classification of divergences into harmless null-space effects and pernicious pathway-activation cases
- Modified Counterfactual Latent loss that keeps intervened representations closer to the natural distribution (sketched below)
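As a hedged sketch of what such an auxiliary term could look like, the snippet below penalizes the distance between an intervened latent and the latent the model produces natively on the counterfactual input. This is an illustrative stand-in for the idea described above, not the exact CL formulation from Grant (2025); the names `h_patched`, `h_natural_counterfactual`, `behavioral_loss`, and `lam` are hypothetical.

```python
import torch
import torch.nn.functional as F

def cl_style_loss(h_intervened: torch.Tensor, h_counterfactual: torch.Tensor) -> torch.Tensor:
    """Penalize divergence between an intervened latent and the latent the model
    produces natively on the corresponding counterfactual input."""
    return F.mse_loss(h_intervened, h_counterfactual)

# Illustrative usage: add the term to whatever objective trains the intervention
# (e.g., a DAS rotation), weighted by a hypothetical coefficient `lam`.
# total_loss = behavioral_loss + lam * cl_style_loss(h_patched, h_natural_counterfactual)
```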
- causal interventions
- mechanistic interpretability
- representation divergence analysis
Limitations
- No principled method for classifying harmful divergence for any causal claim (from the paper)
- Modified CL loss confined to a narrow set of simplistic settings and not specific to pernicious divergence (from the paper)

Future work
- Explore ways to classify and mitigate pernicious divergence through self-supervised means (from the paper)
Author keywords
- activation patching
- mech interp
- DAS
- representational divergence
- faithfulness
Related orals
Verifying Chain-of-Thought Reasoning via Its Computational Graph
CRV uses attribution graphs as execution traces to verify chain-of-thought reasoning with white-box mechanistic analysis of computation failures.
Temporal Sparse Autoencoders: Leveraging the Sequential Nature of Language for Interpretability
Temporal Sparse Autoencoders incorporate a contrastive loss that encourages consistent feature activations over adjacent tokens in order to discover semantic concepts.
Temporal superposition and feature geometry of RNNs under memory demands
Studies temporal superposition in RNNs, showing how memory demands affect representational geometry and how RNNs learn different encoding strategies.
Exploratory Causal Inference in SAEnce
Uses sparse autoencoders and foundation models to discover unknown causal effects in scientific trials.