"OpenAI was plugging Claude into its own internal tools using special developer access (APIs)"
Unless it's actually some internal Claude API which OpenAI were using with an OpenAI benchmarking tool, this sounds like a hyped-up way for Wired to phrase it.
Almost like: `Woah man, OpenAI HACKED Claude's own AI mainframe until Sonnet slammed down the firewall man!` ;D
Seriously though, why phrase API use of Claude as "special developer access"?
I suppose it's reasonable to disagree on what counts as fair for safety benchmarking, e.g. where you draw the line and say, "hey, that's stealing" vs "they were able to find safety weak spots in their model". I wonder how efficient the best labs are at hunting for weak areas!
Funnily enough I think Anthropic have banned a lot of people from their API, myself included - and all I did was see if it could read a letter I got, and they never responded to my query to sort it out! But what does it matter if people can just use OpenRouter?
I agree, it does sound like they're hyping it up. But maybe the author was confused. In the API ecosystem, there are special APIs that some customers get access to that the normal riff-raff do not. If someone called those "special developer access", I don't think it'd be wrong.
They banned my account completely for violation of ToS and never responded to my query, following my 3 or 4 chats with Claude where I asked for music and sci-fi book recommendations.
I never violated the ToS; the account was created through their UI and used literally a few times.
Well, I don't use them at all except for very rare tests, through OpenRouter indeed.
Well, I don't think Anthropic are morons; that's not the point I was making.
Yes, I'm frustrated with Anthropic killing my direct API account for silly reasons, with no response. But actually I really appreciate Anthropic's models for code, their deep safety research with Constitutional AI, interpretability studies, etc.
They are certainly guilty of having scaling and customer-service issues, and of making the wrong call with a faulty moderation system (for you too, and many others, it seems)!
But a lot of serious AI safety research that could literally save all our skin is being done by Anthropic, some of the best.
On OpenAI's API platform, I am on Tier 5! It's unfortunate Anthropic have been less commercially savvy than OpenAI (at the time, at least). I have complained on HN and I think on Twitter before about my account, to no avail, after emailing first.
But yeah, usually I just use them via OpenRouter these days; it's a shame that I have to go through it for API access.
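For anyone curious, a minimal sketch of the OpenRouter route, assuming the `openai` Python SDK (v1+) and an OpenRouter key; the model slug here is illustrative:

```python
# Minimal sketch: OpenRouter exposes an OpenAI-compatible endpoint,
# so Claude models stay reachable without a direct Anthropic account.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenAI-compatible endpoint
    api_key="sk-or-...",                      # OpenRouter key, not an Anthropic one
)
resp = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",      # illustrative Claude slug on OpenRouter
    messages=[{"role": "user", "content": "Can you read this letter for me?"}],
)
print(resp.choices[0].message.content)
```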
I get the impression that a lot of OpenAI researchers went to Anthropic, which essentially is the first OpenAI splinter group.
I think this is a sign of a serious, more healthy intellectual culture. I'm looking forward to seeing what they do next.
I’ve unfortunately had the same thing happen to me and am trying to run the gauntlet of getting a response.
Even more annoying is that I suspect it's an issue linked to Google SSO and IP configurations.
I'm personally a big fan of Anthropic taking a more conservative approach compared to other tech companies that insist it's not their responsibility - this is just a natural follow-on where we get a lot of false positives.
Having said that, I'm desperate for my account to be unbanned so I can use it again!
> Seriously though, why phrase API use of Claude as "special developer access"?
Isn't that precisely what an API is? Normal users don't use the API; other programs, written by developers, use it to access Claude from their app. That's like asking why an SDK is described as a special kit for developers to build software that works with something they want to integrate into their app.
If I'm an OpenAI employee, and I use Claude Code via the API, I'm not doing some hacker-fu, I'm just using a tool a company released for the purpose they released it.
I understand that they were technically "using it to train models", which, given OpenAI's stance, I don't have much sympathy for, but it's not the "special developer hackery" this makes it sound like.
Because it's not "special developer access". It's just "normal developer access". The phrasing gives an impression they accessed something other users cannot.
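To make that concrete, here is a minimal sketch of what that "access" amounts to: one authenticated HTTP call to Anthropic's public Messages API, the same call any paying customer makes (the model ID is illustrative):

```python
# Minimal sketch: "developer access" to Claude is just an authenticated
# POST to the public Messages API -- nothing privileged about it.
import os
import requests

resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],  # any paying customer has one
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-sonnet-4-20250514",  # illustrative model ID
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello, Claude"}],
    },
)
print(resp.json()["content"][0]["text"])
```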
> According to Anthropic’s commercial terms of service, customers are barred from using the service to “build a competing product or service, including to train competing AI models”
That's... quite a license term. I'm a big fan of tools that come with no restrictions on their use in their licenses. I think I'll stick with them.
For years it was a license violation to use Microsoft development tools to build a word processor or spreadsheet. It was also a violation of your Oracle license to publish benchmark results comparing Oracle to other databases.
If you compete with a vendor, or give aid and comfort to their competitors, do not expect the vendor to play nice with you, or even keep you on as a customer.
These anti-competitive clauses are becoming standard across all major AI providers - Google, Microsoft, and Meta have similar terms. The industry is converging on a licensing model that essentially creates walled gardens for model development.
"Everyone else is doing it" doesn't make it right.
It also makes it dangerous to become dependent on these services. What if at some point in the future, your provider decides that something you make competes with something they make, and they cut off your access?
Oh, I wonder if that applies to me? I've been using Claude to experiment with SNNs for language models. Doubt anything will come of it... it has mostly just been a fun learning experience, but it is technically a "competing product" (one that doesn't work yet).
Exactly this. Strange that this comment got downvoted. AI companies are scraping the entire internet, disregarding copyright and pirating books. Without that, the models would be useless.
Good luck with that! Most of the relevant model providers include similar terms (Grok, OpenAI, Anthropic, Mistral, basically everyone with the exception of some open-model providers).
The article does not say anything substantial; it just presents some opposing viewpoints:
1/ OpenAI's technical staff were using Claude Code (via the API, not the Max plans).
2/ Anthropic's spokesperson says API access for benchmarking and evals will remain available to OpenAI.
3/ OpenAI said it's using the APIs for benchmarking.
I guess model benchmarking is fine, but tool benchmarking is not. It sounds like OpenAI were trying to see whether their product works better than Claude Code (each with its own proprietary models) on certain benchmarks, and that is the access Anthropic revoked. How they caught it is the more remarkable part. It's one thing to use Sonnet 4 to solve a problem on LiveBench; it's slightly different to do it via the harness, for which Anthropic never published any results themselves. Not saying this is the right stance, but it seems to be the stance.
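To illustrate the distinction, an entirely hypothetical sketch: "model benchmarking" scores raw API completions per task, while "tool benchmarking" drives the vendor's whole product end to end. This assumes Claude Code's non-interactive `claude -p` print mode as the harness; the task format and toy scoring are made up:

```python
# Hypothetical tool-level benchmark: run the vendor's agent harness
# end-to-end on each task and score the combined model-plus-harness system.
import subprocess
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    expected: str  # substring a correct answer should contain (toy scoring)

def score(output: str, task: Task) -> float:
    return 1.0 if task.expected in output else 0.0

def bench_harness(tasks: list[Task]) -> float:
    """End-to-end pass rate through the agent harness, not the raw model."""
    total = 0.0
    for t in tasks:
        # `claude -p` is Claude Code's non-interactive print mode.
        result = subprocess.run(["claude", "-p", t.prompt],
                                capture_output=True, text=True)
        total += score(result.stdout, t)
    return total / len(tasks)
```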
Feels like something a Jepsen or such should be doing instead of competitors trying to clock each other directly. I can see why they would feel uncomfortable about this situation.
>Nulty says that Anthropic will “continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry.”
This is, ultimately, a great PR move by Anthropic: 'We are so good OpenAI uses us over their own.'
They know full well OpenAI can just sign up again, just not under an official OpenAI account.
OpenAI Services Agreement: "Customer will not [...] use Output to develop artificial intelligence models that compete with OpenAI’s products and services"
Didn't a whole bunch of AI companies make the news for refusing to respect X law in AI training? So far, X has been:
* copyright law
* trademark law
* defamation law (ChatGPT often reports wrong facts about specific people, products, companies, ... most seriously claiming someone was guilty of murder. Getting ChatGPT to say obviously wrong things about products is trivial)
* contract law (bypassing scraping restrictions they had agreed to as a company beforehand)
* harassment (ChatGPT made pictures depicting specific individuals doing ... well you can guess where this is going. Everything you can imagine. Women, of course. Minors. Politics. Company politics ...)
So far, they seem to have gotten away with everything.
Not sure if you're serious... you think OpenAI should be held responsible for everything their LLM ever said? You can't make a token generator unless the tokens generated always happen to represent factual sentences?
Most of these things are extremely valuable for humanity and therefore I think it's a very good thing that we have had a light touch approach to it so far in the west.
E.g., ignoring that I find that OpenAI, Google, and Anthropic in particular do take harassment and defamation extremely seriously (it takes serious effort to get ChatGPT to write a Bob Saget joke, let alone something seriously illegal), if they were bound by "normal" law it would be a sepulchral dalliance with safety-ism that would probably kill the industry, OR just enthrone (probably) Google and Microsoft as the winners forever.
Presumably an AI should know about the trademarks, they are part of the world too. There is no point shielding LLMs from trademarks in the wild. A text editor can also infringe trademarks, depending how you use it. AI is taking its direction from prompts, humans are driving it.
I don’t understand why OpenAI would ever want to “take cues from Claude.” As a heavy user, I honestly find Claude’s tone way too much like a customer service bot—no emotion, no stance, and constantly saying things like “it depends on your perspective.”
I use AI to get direct, specific, and useful responses—sometimes even to feel understood when I can’t fully put my emotions into words. I don’t need a machine that keeps circling around the point and lecturing me like a polite advisor.
If ChatGPT ever starts sounding like Claude, I might seriously reconsider whether I still want to use it.
Please, OpenAI — don’t make ChatGPT sound like Claude. I’m not here for overly cautious, vague “customer service” replies. I’m here for clarity, precision, and something that actually feels like it understands me. If GPT turns into Claude, I’m seriously out.
GPT is what I go to when I want to talk and think. Claude is what I use when I need code or research. If OpenAI turns GPT into Claude… then what am I left with?
I’m not asking for GPT to beat Claude. I’m asking it to stay GPT — the one that can actually talk to me like a human, not just give sanitized textbook responses.
Isn’t half the schtick of LLMs making software development available for the layman?
It's Software Development Kit, not Special Developer Kit ;-)
source?
Certainly a mindset befitting Microsoft and Oracle, if I ever saw one.
I don’t believe it should be legal, but I see why they would be butt-hurt
These unknown companies called Microsoft, Oracle, Salesforce, Apple, Adobe, … et al have all had these controversies at various points.
They don't target and analyze specific users or organizations - that would be fairly nefarious.
The only exception would be if there are flags for trust and safety. https://support.anthropic.com/en/articles/8325621-i-would-li...
Live by the sword, die by the sword.