This paper tackles a gap most LLM benchmarks miss: how agentic systems behave after deployment, when errors compound, tools fail, and outputs drift over time. It proposes a production-oriented evaluation framework that should be useful for teams shipping long-running agents.
arXiv:2605.01604v1 Announce Type: new Abstract: Existing evaluation frameworks for large language models -- including HELM, MT-Bench, AgentBench, and BIG-bench -- are designed for controlled, single-session, lab-scale settings. They do not address the evaluation challenges that emerge when agentic AI systems operate continuously in production: compounding decision errors, tool failure cascades, non-deterministic output drift, and the absence of ground truth for long-horizon tasks. This paper…
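To make one of the named challenges concrete, here is a minimal, hypothetical sketch (not taken from the paper) of how a team might flag non-deterministic output drift in production by comparing an agent's current outputs against a frozen baseline set. The function names, the `run-level` comparison strategy, and the `DRIFT_THRESHOLD` value are all illustrative assumptions, not the paper's method.

```python
# Hypothetical drift check: compare current agent outputs against a frozen baseline.
# All names and thresholds here are illustrative assumptions, not the paper's framework.
from difflib import SequenceMatcher
from statistics import mean

DRIFT_THRESHOLD = 0.85  # assumed similarity floor; tune per task and prompt


def output_similarity(a: str, b: str) -> float:
    """Cheap lexical similarity between two agent outputs, in [0.0, 1.0]."""
    return SequenceMatcher(None, a, b).ratio()


def drift_score(baseline_outputs: list[str], current_outputs: list[str]) -> float:
    """Average pairwise similarity of current outputs against the baseline set."""
    return mean(
        output_similarity(b, c)
        for b in baseline_outputs
        for c in current_outputs
    )


def check_for_drift(baseline_outputs: list[str], current_outputs: list[str]) -> float:
    """Log a warning when similarity to the baseline falls below the threshold."""
    score = drift_score(baseline_outputs, current_outputs)
    if score < DRIFT_THRESHOLD:
        print(f"Possible output drift: similarity {score:.2f} < {DRIFT_THRESHOLD}")
    return score
```

A real deployment would likely replace the lexical similarity with a task-specific or embedding-based comparison; the point of the sketch is only that drift detection needs a stored baseline and a tolerance, since long-horizon production tasks rarely have ground truth to check against directly.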