AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories
Abstract
Web agents enable users to perform tasks on web browsers through natural language interaction. Evaluating web agent trajectories is an important problem, since it helps us determine whether the agent successfully completed its tasks. Rule-based methods are widely used for this purpose, but they are challenging to extend to new tasks and may not always recognize successful trajectories. Human evaluation can achieve higher accuracy, but the process would be substantially slower and more expensive. Automatic evaluation with LLMs may avoid the challenges of designing new rules and manually annotating trajectories, enabling faster and more cost-effective evaluation. However, it is unclear how effective LLMs are at evaluating web agents. To this end, we propose AgentRewardBench, the first benchmark to assess the effectiveness of LLM judges for evaluating web agents. AgentRewardBench contains 1302 trajectories across 5 benchmarks and 4 LLMs. Each trajectory in AgentRewardBench is reviewed by an expert, who answers questions pertaining to the success, side effects, and repetitiveness of the agent. Using our benchmark, we evaluate 12 LLM judges and find that no single LLM excels across all benchmarks. We also find that the rule-based evaluation used by common benchmarks tends to underreport the success rate of web agents, highlighting a key weakness of rule-based evaluation and the need to develop more flexible automatic evaluations. We release the benchmark at: https://agent-reward-bench.github.io
Community
AgentRewardBench
AgentRewardBench is a benchmark for assessing the effectiveness of automatic evaluation methods (such as LLM judges) for web agent trajectories. By comparing against human annotations across 5 web benchmarks, AgentRewardBench can be used to determine which LLM is the most capable at evaluating web agents.
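To make the comparison concrete, the sketch below shows one way to score an LLM judge's success verdicts against expert annotations of trajectories. This is not the official AgentRewardBench API: the `TrajectoryAnnotation` fields and the `judge_precision_recall` helper are hypothetical names chosen purely for illustration.

```python
# Minimal sketch (not the official AgentRewardBench API): comparing an LLM
# judge's success verdicts against expert annotations for a set of trajectories.
# The field names below ("expert_success", "judge_success") are hypothetical.
from dataclasses import dataclass


@dataclass
class TrajectoryAnnotation:
    trajectory_id: str
    expert_success: bool   # expert's answer to "did the agent complete the task?"
    judge_success: bool    # LLM judge's verdict for the same trajectory


def judge_precision_recall(annotations: list[TrajectoryAnnotation]) -> tuple[float, float]:
    """Precision and recall of the judge's 'success' verdicts relative to expert labels."""
    tp = sum(a.judge_success and a.expert_success for a in annotations)
    fp = sum(a.judge_success and not a.expert_success for a in annotations)
    fn = sum(not a.judge_success and a.expert_success for a in annotations)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


# Example usage with toy data
annotations = [
    TrajectoryAnnotation("traj-001", expert_success=True, judge_success=True),
    TrajectoryAnnotation("traj-002", expert_success=False, judge_success=True),
    TrajectoryAnnotation("traj-003", expert_success=True, judge_success=False),
]
print(judge_precision_recall(annotations))  # (0.5, 0.5)
```

Aggregating such scores per judge and per benchmark is, at a high level, how one would identify which LLM judge agrees most closely with expert reviewers.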