kirktrue · 8 months ago
Unless I'm repeatedly missing it, the article doesn't mention how much money the researchers spent performing the tests. What was the budget for running the AI? If the researchers spent only $10,000 to “earn” $400,000, that's amazing, whereas if they spent $500,000 for the same result, it's obviously far less exciting.
amelius · 8 months ago
And did they actually earn anything, or did they just evaluate the performance and link it to a fee?
victorbjorklund · 8 months ago
Totally. Solving the coding task is just half the challenge. You still have to win the job, etc.
josefresco · 8 months ago
This resonated with me based on my recent experience using Claude to help me code. I almost gave up, but after 7-10 failed tries I re-phrased the initial request and it finally nailed it.

> 3. Performance improves with multiple attempts: Allowing the o1 model 7 attempts instead of 1 nearly tripled its success rate, from 16.5% to 46.5%. This hints that current models may have the knowledge to solve many more problems but struggle with execution on the first try.

https://newsletter.getdx.com/i/160797867/performance-improve...
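A minimal sketch of the multiple-attempts effect the newsletter describes: sample k completions and count the task as solved if any attempt passes its tests. Everything here is hypothetical scaffolding (attempt_task and passes_tests are stand-ins, not anything from the paper); the only real number is the quoted 16.5% single-attempt rate.

```python
import random

def attempt_task(task: str) -> str:
    """Hypothetical stand-in for one model completion (one sample)."""
    return f"candidate patch for {task!r}"

def passes_tests(solution: str) -> bool:
    """Hypothetical stand-in for the task's acceptance tests.

    Assumes each attempt succeeds independently at the quoted
    16.5% single-attempt rate.
    """
    return random.random() < 0.165

def solved_within_k_attempts(task: str, k: int = 7) -> bool:
    """A task counts as solved if ANY of k independent attempts passes."""
    return any(passes_tests(attempt_task(task)) for _ in range(k))

if __name__ == "__main__":
    trials = 10_000
    solved = sum(solved_within_k_attempts("example task") for _ in range(trials))
    print(f"estimated pass@7: {solved / trials:.3f}")  # ~0.72 under independence
```

Interestingly, under independence pass@7 would be 1 − (1 − 0.165)^7 ≈ 0.72, well above the observed 46.5%, which suggests a model's repeated attempts at the same task are correlated rather than independent draws.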

Suppafly · 8 months ago
I haven't really messed with Claude or other programming AIs much, but when using ChatGPT for random stuff, it seems like the safety rails end up blocking a lot of things, and rephrasing to get around them is necessary. I wonder if some of these programming AIs would be more useful if the context that causes them to produce invalid results were more obvious to users.
runlaszlorun · 8 months ago
> safety rails end up blocking a lot of stuff

Curious if you have any examples. I'm fairly meh on LLM coding myself, but I have a pet theory on safety rails. I've certainly hit plenty myself, though not while coding with LLMs.

dboreham · 8 months ago
How do they know the tasks were "solved"? Wouldn't that require the customer to be happy, and pay the bounty?
fxtentacle · 8 months ago
It's an OpenAI ad... And BTW the actual paper says: "we [..] find that frontier models are still unable to solve the majority of tasks"
jsnell · 8 months ago
Honestly, this reads like an AI-generated summary.

Discussion on original paper: https://news.ycombinator.com/item?id=43086347

amelius · 8 months ago
There goes all the low-hanging fruit ...
112233 · 8 months ago
Wait a bit. There will eventually be plenty of work for the IT cleanup crews needed to mop up all the vibe-damage from the locust swarm of greedy binheads currently puking out imitation code with bugs and issues no man has seen before. (If there's server left standing on server after all this.)
tempire · 8 months ago
No
cmsj · 8 months ago
tl;dr, and as Betteridge's Law would lead you to believe, the answer is no.
Suppafly · 8 months ago
>Betteridge's Law

Is that the one that says that if an article's title ends in a question mark, the answer is no?

amelius · 8 months ago
Yes, and it especially works well for questions that sound too good to be true.