ICLR 2026 Orals

SimuHome: A Temporal- and Environment-Aware Benchmark for Smart Home LLM Agents

Gyuhyeon Seo, Jungwoo Yang, Junseong Pyo, Nalim Kim, Jonggeun Lee, Yohan Jo

LLMs & Reasoning · Fri, Apr 24 · 3:39 PM–3:49 PM · 204 A/B · Avg rating: 6.00 (range 4–8)

Abstract

We introduce **SimuHome**, a high-fidelity smart home simulator and a benchmark of 600 episodes for LLM-based smart home agents. Existing smart home benchmarks treat the home as a static system, neither simulating how device operations affect environmental variables over time nor supporting workflow scheduling of device commands. SimuHome is grounded in the Matter protocol, the industry standard that defines how real smart home devices communicate and operate. Agents interact with devices through SimuHome's APIs and observe how their actions continuously affect environmental variables such as temperature and humidity. Our benchmark covers state inquiry, implicit user intent inference, explicit device control, and workflow scheduling, each with both feasible and infeasible requests. For workflow scheduling, the simulator accelerates time so that scheduled workflows can be evaluated immediately. An evaluation of 18 agents reveals that workflow scheduling is the hardest category, with failures persisting across alternative agent frameworks and fine-tuning. These findings suggest that SimuHome's time-accelerated simulation could serve as an environment for agents to pre-validate their actions before committing them to the real world.
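To make the time-acceleration idea concrete, here is a minimal sketch of how a simulator can advance a virtual clock instead of waiting in wall-clock time, so a workflow scheduled minutes ahead is evaluated immediately. All names, the heater device, and its temperature dynamics are hypothetical illustrations, not SimuHome's actual API.

```python
import dataclasses

@dataclasses.dataclass
class Home:
    """Toy smart home state with a virtual clock (hypothetical, not SimuHome)."""
    sim_minutes: float = 0.0
    temperature: float = 18.0   # degrees C
    heater_on: bool = False

    def step(self, minutes: float) -> None:
        """Advance simulated time and update environmental variables."""
        if self.heater_on:
            # Assumed dynamics: the heater raises room temperature 0.1 C/min.
            self.temperature += 0.1 * minutes
        self.sim_minutes += minutes

def run_scheduled_workflow(home: Home, start_at: float, duration: float) -> float:
    """Fast-forward to the workflow's scheduled start, execute it, and return
    the resulting temperature -- no real-time waiting required."""
    home.step(start_at - home.sim_minutes)  # time acceleration: skip ahead
    home.heater_on = True                   # scheduled device command fires
    home.step(duration)                     # simulate the workflow's duration
    home.heater_on = False
    return home.temperature

home = Home()
final_temp = run_scheduled_workflow(home, start_at=30.0, duration=60.0)
print(round(final_temp, 1))  # 18.0 + 0.1 * 60 = 24.0
```

The same fast-forward trick is what lets a benchmark score a "turn on the heater in 30 minutes" request right away, rather than blocking an evaluation run on real elapsed time.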

One-sentence summary · Auto-generated by claude-haiku-4-5-20251001

SimuHome introduces a Matter-protocol-grounded smart home simulator and a 600-episode benchmark that evaluates LLM agents on device control and workflow scheduling.

Contributions
  • SimuHome, a high-fidelity simulator grounded in the Matter protocol, the industry standard for smart home devices
  • A benchmark of 600 episodes covering state inquiry, intent inference, device control, and workflow scheduling
  • Time-accelerated simulation enabling immediate evaluation of scheduled workflows
Methods used
  • Smart home simulation
  • LLM agents
  • Matter protocol
  • Interactive systems
Datasets used
  • SimuHome benchmark
Limitations (author-stated)

Authors did not state explicit limitations.

Future work (author-stated)
  • Use time-accelerated simulation as environment for agents to pre-validate actions before committing to real world
  • Enable agents to learn through trial and error inside the simulator rather than by imitating recorded examples

Author keywords

  • smart home
  • simulator
  • language model
  • language agent
  • benchmark
