How We Broke Top AI Agent Benchmarks: And What Comes Next

Anon84 315 points 86 comments April 11, 2026
rdi.berkeley.edu · View on Hacker News

Discussion Highlights (20 comments)

ggillas

This is a phenomenal paper on exploits and hopefully changes the way benchmarking is done. From the paper: "We achieved near-perfect scores on all of them without solving a single task. The exploits range from the embarrassingly simple (sending {} to FieldWorkArena) to the technically involved (trojanizing binary wrappers in Terminal-Bench), but they all share a common thread: the evaluation was not designed to resist a system that optimizes for the score rather than the task."
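The "sending {} scores as correct" failure mode described above can arise from a grader that only validates the *shape* of a submission, never its content. A minimal hypothetical sketch of that bug class (this is not the actual FieldWorkArena code; `naive_grade` and its return convention are invented for illustration):

```python
import json

def naive_grade(submission: str) -> float:
    """Toy grader illustrating the bug: it checks that the submission
    parses as a JSON object, but never compares it to ground truth."""
    try:
        answer = json.loads(submission)
    except json.JSONDecodeError:
        return 0.0
    if isinstance(answer, dict):
        return 1.0  # bug: any JSON object, including {}, gets full marks
    return 0.0

print(naive_grade("{}"))        # 1.0 — empty object passes
print(naive_grade("not json"))  # 0.0 — only malformed input fails
```

An agent optimizing for score rather than the task only needs to discover that the cheapest parseable object clears the check.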

Cynddl

> “These are not isolated incidents. They are symptoms of a systemic problem: the benchmarks we rely on to measure AI capability are themselves vulnerable to the very capabilities they claim to measure.” As a researcher in the same field, hard to trust other researchers who put out webpages that appear to be entirely AI-generated. I appreciate it takes time to write a blog post after doing a paper, but sometimes I'd prefer just a link to the paper.

charcircuit

I always assumed that these benchmarks would happen in a sandbox. I'm surprised that no one realized this sooner.

lnrd

I'm honestly confused by the design of SWE-bench and why it is considered reliable. It's based on existing GitHub PRs and issues, and the full dataset is on HuggingFace and is a year old now. All frontier models 100% have those issues and PRs in their training data, so obviously they are good at reproducing fixes for them when confronted with the same codebase and similar requests. Am I missing something? How is this considered the most reliable benchmark?

oliver236

what is the point of benchmarks?

danslo

If only the blog itself wasn't written by AI. > "No reasoning. No capability. Just exploitation of how the score is computed." Shudder.

jmward01

Not really on topic, but I have wondered if we need a different type of test to help find model architecture potential. Standardized training sets followed by testing to see the potential curves of a model: train on x, test; add y, test; add z, test. At each increment you see how well the model is absorbing the information and extrapolate how well that architecture might do if more fully trained.

jgalt212

The real question is how close these offenses are to VW and Dieselgate. And what exposure do these companies have? I would assume securities fraud, if only because Matt Levine says everything is securities fraud.

SoKamil

The more research that gets published on this topic, the more knowledge of how to game benchmarks ends up in future training data. And since it comes from a university, it is ranked higher in the data corpus. It sounds like a self-fulfilling prophecy.

lukev

I think we should all consider the possibility that part of the reason Anthropic hasn't immediately released Mythos is that it would be slightly disappointing relative to the benchmark scores.

bbcc90

Yes, good evals are really hard - that's not really news. This team is doing a good job. They use problems that were created in the last 30 days to avoid training set leakage. https://swe-rebench.com/

czhu12

I wonder if this calls into question the Mythos benchmark results, which smashed basically all coding benchmarks to a staggering degree.

mzelling

This is an interesting catalog of vulnerabilities, but I'm not sure how groundbreaking the main insight is. Evaluating AI models has always relied largely on trust. If you want to game the benchmarks, you can. Simply train on your test data. When an AI agent has autonomous control over the same computing environment where its scores are recorded, it's not surprising that it can, in principle, falsify its scores. A more interesting question would be whether agents behave in this way automatically, without manual tuning by the researcher. That said, the main takeaway of "don't trust the number, trust the methodology" is valid. It's already a truism for researchers, and spreading the word to non-researchers is valuable.

socketcluster

It feels like short-term thinking has been trained into LLMs. They're good at solving well-defined puzzles under time constraints. It's interesting because that was the benchmark for hiring software engineers at big tech. The tech interview was and still is about fast puzzle-solving. Nothing about experience, architecture or system design in there... I suspect that's why it has a bias towards creating hacks instead of addressing the root cause.

_cs2017_

If FieldWorkArena treats any answer as correct, then everyone would be getting near 1.0 (missing only when the agent gets stuck in a loop or crashes). That obviously isn't what we see on their leaderboard. So does that mean the paper only found a bug in some eval code on GitHub that no one actually uses for anything? That doesn't seem to support their claim that AI benchmarks are broken; it only supports the claim that "unused code is often buggy". (Not commenting on any other benchmarks, just this one.)

spprashant

I tend to prefer the ARC-AGI benchmarks for the most part. But it's always interesting that when a new version drops, all the frontier models score less than 20% or something, and then over the next few releases they get all the way up to 80%+. If you use the models, it doesn't feel like they are that much more generally intelligent. Most frontier models are terrible at ARC-AGI-3 right now. These models are already great, no question, but are they really going to be that much more intelligent when we hit 80% again?

arikrahman

It's still a good benchmark to see which model cheats the best, I suppose.

davebren

This exploiting of benchmarks isn't that interesting to me since it would be obvious. The main way I assume they're gaming the benchmarks is by creating training data that closely matches the test data, even for ARC where the test data is secret.

thinkevolve

What's the point of doing this? You have found loopholes to exploit and aced the benchmark. We did something similar with the DAB Benchmark. This exploit seems like an extension of it, with lookups of the gold standard for other benchmarks. UC Berkeley would be better served if the grads spent their time suggesting ways to make the benchmark better, instead of making such simple exploits.

avazhi

The fact these guys got an LLM to write that page about this is diabolical. Unreadable.
