Reliable Weak-to-Strong Monitoring of LLM Agents
Neil Kale, Chen Bo Calvin Zhang, Kevin Zhu, Ankit Aich, Paula Rodriguez, Christina Q Knight, Zifan Wang
This paper introduces a monitor red teaming (MRT) workflow to stress-test systems that detect covert misbehavior in LLM agents, finding that a well-designed monitor scaffold enables weaker models to oversee stronger, monitoring-aware attackers.
Abstract
We stress-test monitoring systems for detecting covert misbehavior in LLM agents (e.g., secretly exfiltrating data). We propose a monitor red teaming (MRT) workflow that varies agent and monitor awareness, adversarial evasion strategies, and evaluation across tool-calling (SHADE-Arena) and computer-use (CUA-SHADE-Arena) environments. We benchmark standard monitor scaffoldings and introduce a hybrid hierarchical–sequential design. Our experiments yield three findings. First, agent awareness dominates monitor awareness: agents that know they are monitored substantially degrade detection, while increasing monitor awareness helps less than expected. Second, monitor scaffolding matters: our hybrid design consistently outperforms baselines and enables weaker monitors to oversee stronger agents (a weak-to-strong effect). Third, targeted human oversight is key: escalating only pre-flagged cases improves the true-positive rate (TPR) by 15% at a false-positive rate (FPR) of 0.01. Our work positions MRT as a standard workflow for stress-testing oversight, revealing robustness gaps in both LLM- and human-based monitoring. We release code, data, and logs to support further research.
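The abstract describes a hybrid hierarchical–sequential monitor scaffold but does not specify its mechanics. Below is a minimal illustrative sketch, assuming (hypothetically) that a sequential view tracks a decayed running suspicion over trajectory steps, a hierarchical view aggregates chunk-level maxima, and a case is escalated to a human only when either view crosses a threshold. The `Step` type, per-step suspicion scores, and all parameters are assumptions for illustration, not the paper's actual implementation.

```python
# Hedged sketch of a hybrid hierarchical-sequential monitor.
# All names, scores, and parameters here are hypothetical.
from dataclasses import dataclass


@dataclass
class Step:
    action: str
    score: float  # per-step suspicion in [0, 1], assumed to come from an LLM judge


def sequential_score(steps: list[Step], decay: float = 0.8) -> float:
    """Sequential view: a decayed accumulator updated step by step."""
    s = 0.0
    for step in steps:
        s = decay * s + (1 - decay) * step.score
    return s


def hierarchical_score(steps: list[Step], chunk: int = 3) -> float:
    """Hierarchical view: score each chunk by its worst step, then average."""
    chunks = [steps[i : i + chunk] for i in range(0, len(steps), chunk)]
    return sum(max(st.score for st in c) for c in chunks) / len(chunks)


def hybrid_flag(steps: list[Step], threshold: float = 0.5) -> bool:
    """Escalate to human review if either view crosses the threshold."""
    return max(sequential_score(steps), hierarchical_score(steps)) >= threshold
```

A benign trajectory with uniformly low per-step suspicion stays below the threshold in both views, while a single high-suspicion step raises the chunk maximum and triggers escalation; this captures the targeted-oversight idea of sending humans only pre-flagged cases.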
Authors did not state explicit future directions.
Author keywords
- Agent Safety
- Chain-of-Thought Monitoring
- Large Language Model
Related orals
LLM Fingerprinting via Semantically Conditioned Watermarks
Introduces semantically conditioned watermarks for stealthy LLM fingerprinting that remains robust across deployment scenarios.
Steering the Herd: A Framework for LLM-based Control of Social Learning
A framework for studying strategic control of social learning by algorithmic information mediators, combining theoretical analysis with LLM-based simulations.
Every Language Model Has a Forgery-Resistant Signature
Ellipse signatures function as forgery-resistant model output identifiers based on high-dimensional geometric constraints.
Gaussian certified unlearning in high dimensions: A hypothesis testing approach
Analyzes machine unlearning in high dimensions, showing that a single noisy Newton step with Gaussian noise suffices for a favorable privacy-accuracy trade-off.
Differentially Private Domain Discovery
WGM-based methods provide efficient domain discovery with near-optimal guarantees for missing mass on Zipfian data.