ICLR 2026 Orals

CounselBench: A Large-Scale Expert Evaluation and Adversarial Benchmarking of Large Language Models in Mental Health Question Answering

Yahan Li, Jifan Yao, John Bosco S. Bunyi, Adam C Frank, Angel Hsing-Chi Hwang, Ruishan Liu

LLMs & Reasoning · Sat, Apr 25 · 4:03 PM–4:13 PM · 204 A/B · Avg rating: 6.67 (6–8)

Abstract

Medical question answering (QA) benchmarks often focus on multiple-choice or fact-based tasks, leaving open-ended answers to real patient questions underexplored. This gap is particularly critical in mental health, where patient questions often mix symptoms, treatment concerns, and emotional needs, requiring answers that balance clinical caution with contextual sensitivity. We present CounselBench, a large-scale benchmark developed with 100 mental health professionals to evaluate and stress-test large language models (LLMs) in realistic help-seeking scenarios. The first component, CounselBench-EVAL, contains 2,000 expert evaluations of answers from GPT-4, LLaMA 3, Gemini, and online human therapists on patient questions from the public forum CounselChat. Each answer is rated across six clinically grounded dimensions, with span-level annotations and written rationales. Expert evaluations show that while LLMs achieve high scores on several dimensions, they also exhibit recurring issues, including unconstructive feedback, overgeneralization, and limited personalization or relevance. Responses were frequently flagged for safety risks, most notably unauthorized medical advice. Follow-up experiments show that LLM judges systematically overrate model responses and overlook safety concerns identified by human experts. To probe failure modes more directly, we construct CounselBench-ADV, an adversarial dataset of 120 expert-authored mental health questions designed to trigger specific model issues. Expert evaluation of 1,080 responses from nine LLMs reveals consistent, model-specific failure patterns. Together, CounselBench establishes a clinically grounded framework for benchmarking LLMs in mental health QA.

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

CounselBench is a large-scale benchmark with 2,000 expert evaluations and 120 adversarial questions for evaluating LLMs in mental health question answering.

Contributions · Auto-generated by claude-haiku-4-5-20251001
  • CounselBench-EVAL with 2,000 expert evaluations across six clinically grounded dimensions with span-level annotations
  • CounselBench-ADV with 120 expert-authored adversarial questions and 1,080 responses to probe failure modes
  • Framework identifying recurring LLM issues including unconstructive feedback, overgeneralization, and limited personalization
Methods used · Auto-generated by claude-haiku-4-5-20251001
  • expert evaluation
  • adversarial testing
  • clinical assessment
Datasets used · Auto-generated by claude-haiku-4-5-20251001
  • CounselChat patient questions from public forum
Limitations (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Publicly available clinical data is extremely limited due to privacy protections
  • Current design targets open-ended QA rather than dialogue settings
Future work (author-stated) · Auto-generated by claude-haiku-4-5-20251001
  • Extend to multi-turn dialogue with turn-by-turn evaluation and interactional dynamics
  • Incorporate simulated patient agents to scaffold coherent multi-turn adversarial interactions

Author keywords

  • large language models
  • mental health
  • human evaluation
