ClawBench: Can AI Agents Complete Everyday Online Tasks? Paper • 2604.08523 • Published 22 days ago • 261
Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents Paper • 2604.06132 • Published 24 days ago • 118
FORGE: Fine-grained Multimodal Evaluation for Manufacturing Scenarios Paper • 2604.07413 • Published 23 days ago • 95
GBQA: A Game Benchmark for Evaluating LLMs as Quality Assurance Engineers Paper • 2604.02648 • Published 28 days ago • 46
KnowU-Bench: Towards Interactive, Proactive, and Personalized Mobile Agent Evaluation Paper • 2604.08455 • Published 22 days ago • 47
ClawArena: Benchmarking AI Agents in Evolving Information Environments Paper • 2604.04202 • Published 26 days ago • 37
ClawsBench: Evaluating Capability and Safety of LLM Productivity Agents in Simulated Workspaces Paper • 2604.05172 • Published 25 days ago • 24
RubricBench: Aligning Model-Generated Rubrics with Human Standards Paper • 2603.01562 • Published Mar 2 • 63