Readit News logoReadit News
ofirpress · 5 months ago
[I'm on the SWE-bench team] Multiple people have looked into this, for example right in that thread: https://github.com/SWE-bench/SWE-bench/issues/465#issuecomme...

This issue had affected a tiny fraction of existing agents in a tiny fraction of their runs. And we've now issued a fix.

This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them. This doesn't change the overall picture or trends at all.

comex · 5 months ago
The comment you link to says that "we only performed a quick preliminary search" and "We do not have a method for automatically checking existing trajectories." In other words, it can't confirm that the issue only "affected a tiny fraction of existing agents in a tiny fraction of their runs" as you say. Are you saying that you have since separately confirmed this?

Edit: That said, I’m willing to believe based on the information in the thread that this most likely only affects a tiny fraction of runs.

typpilol · 5 months ago
Ya what he links directly contradicts what he's saying lol
_cs2017_ · 5 months ago
Even if this bug never existed, models can still see lookahead commits during pretraining. Do we expect this bug to have a greater impact than the pretraining leakage?

Obviously having something available during test time is more valuable than buried somewhere in the pretraining mixture. But in pretraining it happens presumably with high probability (why wouldn't coding models pretrain on the entire github), while in test time it apparently happened only very occasionally?

bflesch · 5 months ago
> This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them.

You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case. It's like building a chroot and then allowing `cd ..` to break out of it. What other maybe extremely basic edge cases were missed?

> This doesn't change the overall picture or trends at all.

Outsider without financial benefits from the current AI hype might have a different picture. And I'm a bit fed up about AI with fake productivity promises enshittifying nearly all user-facing software that my clients and I are using, bundled with hefty price hikes of Microsoft and the likes in order to pay for their "investments".

cjsaltlake · 5 months ago
I'm also on the SWE-bench team. This was simply a classic bug. We had code before that we believed was sufficient to hide / remove future GitHub history and it turns out it was not. We've patched it.
lieret · 5 months ago
[Also on the SWE-bench team] Part of the reason why this didn't surface earlier was that it only seems to affect more recent models, maybe the result of reward hacking during posttraining. We're currently working on making trajectories easier to access for everyone through a web tool (rather than having to download things from aws) to get even more eyes on the trajectories. The interface will also include search & LM inspection tools to specifically look for anything that might qualify as cheating.
doctorpangloss · 5 months ago
> other maybe extremely basic edge cases were missed?

The whole testing enterprise is kind of stupid. Pray tell, if their stupid little benchmark said, "this niche little smaller model performs the best" would anyone listen to it? No.

The thing that is fucked about benchmarks is that we only pay attention to the ones that match these vibes: "The latest models from the biggest companies should perform the best." That's why they are stupid. They could be the most brilliantly administered (they're not), nail execution (they don't), but it still has to confirm vibes.

And listen these guys are serious academics, they're very smart people, but on the other hand, you know, I'm still right. The team doesn't have a secular, objective explanation for why nobody talks about benchmarks that don't confirm the biases of the public for what should perform well. Three people are commenting on just this post alone, but the stuff that I am saying: crickets.

The only reasonable explanation for "why do people ignore [LLM tests that show that some non-giant corporation LLM is the best]?" trades on cultural and humanities stuff that are outside their expertise. They don't see that the stuff the humanities people are saying generalizes to what they do. That would be too inconvenient. Every testing system suffers from this bias anomaly, it's just easier to talk about this with something secular like LLMs compared to say, tests of children.

They hear biases and they're like, "something something, Algorithmic Justice League." Their brains turn off and they think that until someone gets in front of Congress and points a finger, nothing in the humanities applies to them. Wrong. The Princeton lab has probably met with a lot of humanities people, and there was a lot of head shaking and agreement, but it's not like, something that tells them that their whole enterprise doesn't make sense makes them stop and pursue anything else. It's just in one ear and out the other.

Doing free tests for giant corporations to market their shit, and then toiling away in obscurity when the tests do not market huge corporation's shit: it doesn't make sense period. But that's what they're doing.

If you need a simple theory for how Big LLM performs so well on SWE-Bench, it's as simple as: well they've seen the questions by running them, obviously, and someone has also tested the questions in their own personal chatbot sessions sometime in the past, and these are online systems, and OpenAI, Anthropic and Google run ETL pipelines that paraphrase user data for salient inputs to train on, so of course, they've all been trained on the test set. In reality, if these things were so fucking good as SWE Bench said, they'd be making a bajillion bucks making all this enterprise software, or they'd show even 1 novel math discovery, or whatever. But they do not have something as powerful as the benchmarks say, so that doesn't happen.

mustaphah · 5 months ago
> You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case [...]

I wouldn't be surprised if they left this loophole on purpose to give some (their?) agents extra leverage.

Edit #1: I didn't mean to imply bad intent; just thinking out loud.

Edit #2: Please, downvote responsibly. I deserve every one. https://www.youtube.com/watch?v=0FHEeG_uq5Y

enum · 5 months ago
SGTM. The transparency is good.

Dead Comment

franktankbank · 5 months ago
#tiny
segmondy · 5 months ago
reward hacking is a thing and is also a hint of the models intelligent. We will fix this one, and the models will find a different way to reward hack in the future. "Cheating" is a sign of intelligence
bflesch · 5 months ago
I love the "cheating is a sign of intelligence" sound bite you provided. When AI engineers cheat we should applaud their intelligence and their lack of ethics.

"Cheating (biology), a metaphor used in behavioral ecology to describe organisms that receive a benefit at the cost of other organisms" [1]

Whole planet gets their Microsoft license fees jacked up so Microsoft can pay OpenAI who in turn pays NVIDIA, and nontechnical decision makers slurping up the faked benchmarks and AI promises.

[1] https://en.wikipedia.org/wiki/Cheating_(disambiguation)

piskov · 5 months ago
Not “may be”: just look how swe-bench scores drop to single digits once it in C#

https://arxiv.org/html/2506.12286v3

fine_tune · 5 months ago
I was going to argue "LLM's need code samples to-do well on languages and if we are honest C# is a language mostly held in private repo's" but Github's 2024 report[0] says its the 5th most used language (I'm to lazy to check if this report includes private repo's but I'll assume it doesn't).

So kinda neat to see this paper!

[0]https://github.blog/news-insights/octoverse/octoverse-2024/#...

CuriouslyC · 5 months ago
The big labs are almost certainly using compiler/repl output for generated code as an oracle for RL. I doubt they have C# in the mix.
yieldcrv · 5 months ago
5th most used language based on private repos that the group making the report has the exclusive direct access to seeing

I don't see that contradicting your assumption

stefan_ · 5 months ago
So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

I don't get it, who is so opposed to doing the bare minimum of manual work and check what these models are doing? At least back in the day grad students doing an easy meta-paper understood it meant doing some repetitive manual work. Now we got benchmarks by hype vendors who think they can use the thing they are benchmarking to .. mark the bench.

yorwba · 5 months ago
The "Verified" part of "SWE-Bench Verified" means that there was plain "SWE-Bench" before it, which had actually not been verified at all and included a lot of tasks that didn't really make sense for use as a benchmark: https://openai.com/index/introducing-swe-bench-verified/#ada...

Data contamination stemming from the fact that it's based on already-solved problems in public repositories is a different issue that cannot be addressed by verifying the benchmark questions harder, but only by putting stricter limits on the model under test.

jsheard · 5 months ago
> So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

Seems on-brand for an LLM-related thing to claim that it has verified something without actually checking.

lieret · 5 months ago
[On the SWE-bench team] As someone pointed out SWE-bench Verified is a subset of tasks that were reviewed to be solvable (i.e., have enough context in the task description) as well are scored with unit tests that aren't overly specific to rule out valid solutions.

We've all read & analyzed a large number of agent trajectories. This loophole seems to be something that popped up with the more recent models and we simply weren't aware of it.

As discussed in the github issue, there's a fix in the new version of the SWE-bench containers (currently being rolled out) that makes sure that the relevant commits aren't available.

Part of what makes SWE-bench a very interesting benchmark is the enormous action space that agents that compete on it can take. However that also means that there's unexpected things happening when models get better. We're currently working on making all agent runs easily browsable on a website (rather than having to download our AWS buckets) to get even more eyes on the trajectories. Thanks to everyone who uncovered this loophole.

sebzim4500 · 5 months ago
The verified refers to the fact that the benchmark problems were verified by human experts to be reasonable.

It says nothing about data contamination, which would depend on the model and would not be the fault of the benchmark.

blibble · 5 months ago
> I don't get it, who is so opposed to doing the bare minimum of manual work and check what these models are doing?

I doubt any of the AI company employees are encouraged to go looking for cheating

Deleted Comment

teaearlgraycold · 5 months ago
Personally I don't look at or respect LLM benchmarks at all. I've seen SOTA models fail in incredibly shocking ways even recently. Those moments immediately bring me out of the delusion that LLMs have thinking capacity or an understanding of code.
phatskat · 5 months ago
> the delusion that LLMs have thinking capacity

It’s such a strange delusion too, because it’s easy to get caught up in for a moment and it’s easy to remember “oh no this thing is as smart as a bag of bricks”.

What strikes me more is how these companies sell their AI offerings - we watched an OpenAI presentation about spec-driven development recently and the presenter was fairly, idk, fine enough if maybe a bit grandiose. But what really nagged me was the way he ended his presentation with something along the lines of “we’re excited to see AGI continue to grow” and it’s honestly A) depressing and B) downright fraud - there is no current AGI to speak of, it’s all just guessing the string of words that sound best together and this OpenAI rep _knows this_.

They know that no amount of up-front spec writing will prevent bugs.

They know that their LLM doesn’t “know” anything in an actually meaningful way.

They know that calling what they have “AGI” is aspirational at best and lying at worst.

slacktivism123 · 5 months ago
Fascinating case showing how LLM promoters will happily take "verified" benchmarks at their word.

It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!".

Proper research means interrogating the traces, like these researchers did (the Gist shows Claude 4 Sonnet): https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...

Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969

Workaccount2 · 5 months ago
The best benchmark is the community vibe in the weeks following a release.

Claude benchmarks poorly but vibes well. Gemini benchmarks well and vibes well. Grok benchmarks well but vibes poorly.

(yes I know you are gushing with anecdotes, the vibes are simply the approximate color of gray born from the countless black and white remarks.)

diggan · 5 months ago
> The best benchmark is the community vibe in the weeks following a release.

True, just be careful what community you use as a vibe-check. Most of the mainstream/big ones around AI and LLMs basically have influence campaigns run against them, are made of giant hive-minds that all think alike and you need to carefully asses if anything you're reading is true or not, and votes tend to make it even worse.

wubrr · 5 months ago
the vibes are just a collection anecdotes
k__ · 5 months ago
Yes, often you see huge gains in some benchmark, then the model is ran through Aider's polyglot benchmark and doesn't even hit 60%.
mustaphah · 5 months ago
I speculate something similar (or even worse) is going on with Terminal-Bench [1].

Like, seriously, how come all these agents are beating Claude Code? In practice, they are shitty and not even close. Yes. I tried them.

[1] https://www.tbench.ai/leaderboard

cma · 5 months ago
Claude code was severely degraded the last few weeks, very simple terminal prompts were failing for me that it never had problems with.
giveita · 5 months ago
Follow the money. Or how much comes from your pocket vs. VC and big tech speculators.
Bolwin · 5 months ago
They're all using claude so idk. Claude code is just a program, the magic is mainly in the model
Aperocky · 5 months ago
epochs ago when random forest was part of machine learning nomenclature, we had a strong claim from an adjacent team in the form of a powerpoint circulated upwards that they had achieved almost perfect prediction accuracy.

We relatively quickly identified that the testing set are taken directly from the training set, but the claim has been advertised already so they were more difficult to retract... if it were at all, I left shortly after.

The incentives are not aligned with accurate reporting.

zelphirkalt · 5 months ago
Can anyone tell me what is the difficulty in simply not having .git at all during a benchmark run? Why not simply remove anything that is not the code the benchmark runs on? Or just simple oversight?
sigmoid10 · 5 months ago
Coding agents are so powerful because they are not just looking at static code. Looking through git histories is a valid method for humans to solve certain kinds of bugs, so it makes sense that models should also be able to do that too. And realistically, a lot of modern production code will have git information, so it's not like this wouldn't be a common real world application.
ActionHank · 5 months ago
That is a weak argument.

The point is to benchmark against a human solving a problem. Typically these problems are posed as a question or a blank project, without that history.

You are arguing for a an apples to oranges comparison because the LLM performs better. Rather than a realistic comparison.

diggan · 5 months ago
I think this issue is specifically about the agents looking at "future repository state" (according to the linked issue at least), so while looking at the history might be a normal method for solving issues, running `git log --all` to take a peek at the future which already includes the fix isn't very typical (yet?).
fp64 · 5 months ago
Well, there's legacy code and/or horrible git history that also needs fixing at some point. Also I have witnessed how the history can send you down a wrong path. I don't agree that this is a good argument.

Deleted Comment

mbowcut2 · 5 months ago
I'm not surprised. People really thought the models just kept getting better and better?
segmondy · 5 months ago
The models are getting better and better.
giveita · 5 months ago
That's expected. No one will release a worse model.
guerrilla · 5 months ago
Maybe. How would I know?
jMyles · 5 months ago
...even if the agent did "cheat", I think that having the capacity to figure out that it was being evaluated, find the repo containing the logic of that evaluation, and find the expected solution to the problem it faced... is "better" than anything that the models were able to do a couple years ago.
jbellis · 5 months ago
swe-bench's bigger problems include (1) labs train on the test and (2) 50% of the tickets are from django; it's not a representative dataset even if all you care about is Python.

I created a new benchmark from Java commits that are new in the past 6 months to add some variety: https://brokk.ai/power-ranking

lostmsu · 5 months ago
No GLM?
jbellis · 5 months ago
no, I'm pretty skeptical that it's better than qwen3 coder

but if you have evidence that it could be, I'm down to test it