mike_hearn · 3 months ago
The most interesting thing about this is the apparent absence of unit tests. The test for the XLA compiler bug just prints the outputs; it's more like a repro case than a unit test in the sense of something that would be run by a test harness with coverage tracked. And the action items are simply to lean more aggressively into evals.

Although unit testing an entire LLM is not really feasible right now, all these bugs were in small deterministic parts of the system. Load balancing, top-k probability calculations and so on are all engineered parts no different to other software, and should in principle all be unit testable. At most you need an injectable PRNG. Yes, non-deterministic optimization bugs are awful but I've personally found compiler and database bugs in the past using just regular app test suites. With CI you get a lot of runs so rare events can still surface as long as you investigate flakes. One of my current projects runs every unit test in the same process in parallel, which has proven an excellent and cheap strategy for flushing out rare thread safety issues and database deadlocks.
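
To illustrate, here's a minimal sketch of the kind of deterministic test I mean; it's not Anthropic's code, and pick_pool, the pool names and the threshold are all made up:

    import random

    LONG_CONTEXT_THRESHOLD = 200_000  # hypothetical cutoff in tokens

    def pick_pool(context_len, long_pool, short_pools, rng):
        # Route long-context requests to the 1M pool, spread the rest randomly.
        if context_len > LONG_CONTEXT_THRESHOLD:
            return long_pool
        return rng.choice(short_pools)

    def test_short_requests_never_hit_the_long_context_pool():
        rng = random.Random(42)  # injected PRNG makes the test fully deterministic
        for _ in range(10_000):
            pool = pick_pool(8_192, "pool-1m", ["short-a", "short-b"], rng)
            assert pool != "pool-1m"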

A few days ago I commented on a thread about the Java launch that people often feel Java is "enterprisey" compared to Python because Java code is typically written to be heavily unit testable. A lot of abstraction is driven by the desire for dependency injection, for example. I contrasted that to scripting language culture where I've found testing is often either missing or kinda surface level (e.g. mostly just asserting on types).

When I was learning PyTorch a few years ago I noticed the same thing. The tutorials took you from simple to complex stuff without talking much about how to test or best structure the code. This makes sense for ML research, where you don't have a clear goal and success boils down to maxing a score in some kind of eval, but it doesn't make sense for production deployment at scale.

I wonder if the AI labs could use more people with SRE and HA SWE background to focus on things like this. I'm kinda skeptical that more aggressive rolling evals-in-prod are the best way to avoid bugs like these happening again.

vintagedave · 3 months ago
I've had to write some detailed prompts and examples to have AI generate the kind of unit tests I want in Python. I've seen the assertions on types alone too. I want assertions on values and more.

Even more than that, AI tends to mock _everything_. Mocking is useful, but the more real code a unit test invokes, the better, because the risk lies not only in the code itself but in its interactions, the interface. Yet AI writing Python will mock so heavily that it barely tests even the code itself, with tautological assertions.

I've prompted with heavy warnings against mocking and pointed directly at thorough tests as examples. FWIW, Python does have excellent tools for injection, and you can write really nicely structured code in it.
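
For what it's worth, here's a rough sketch (made-up names) of the style I'm after: plain constructor injection and a real in-memory collaborator instead of a mock, so the test asserts on actual values and behaviour:

    class InMemoryStore:
        def __init__(self):
            self._rows = {}

        def put(self, key, value):
            self._rows[key] = value

        def get(self, key):
            return self._rows.get(key)

    class Signup:
        def __init__(self, store):
            self.store = store  # injected dependency, no framework needed

        def register(self, user_id, email):
            if self.store.get(user_id) is not None:
                raise ValueError("duplicate user")
            self.store.put(user_id, email)

    def test_register_rejects_duplicates():
        signup = Signup(InMemoryStore())  # real collaborator, not a mock
        signup.register("u1", "a@example.com")
        try:
            signup.register("u1", "b@example.com")
            assert False, "expected a duplicate to be rejected"
        except ValueError:
            pass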

redman25 · 3 months ago
I wish I had 100 upvotes to give you. Weak, heavily mocked tests are my biggest pet peeve. Test “quality” is important and not something a lot of devs pay attention to.

I’ve found myself preferring integration tests or unit tests with a “real” database set up because the tests are much more effective. If you design them right, they don’t even need to be slower.

bobbylarrybobby · 3 months ago
When asked to write UI tests (playwright), I've seen Claude Code do essentially the following:

    const elem = document.querySelector(".foo"); // actual element that exists
    elem.innerHTML = '<div class="bar"></div>';
    const child = elem.locator(".bar"); // child we actually want to test for
    expect(child).toExist()

Gee thanks Claude, what a great test...

andoando · 3 months ago
Mocked tests also make refactoring a pain in the ass.

This is why I heavily prefer integration tests

mike_hearn · 3 months ago
I'm curious how you structure your Python to make it properly testable. I have to admit, my own use of Python has been limited to scripts and (a long time ago) a game engine, not large codebases, so unit testing hardly came up for those.

It seems there's a couple of dependency injection frameworks but they're clones of what's found in Java, right down to the type names. One of them even calls injectable objects beans! (Rhazes)

HoyaSaxa · 3 months ago
I’m pretty surprised that Anthropic can directly impact the infra for AWS Bedrock as this article suggests. That goes against AWS’s commitments. I’m sure the same is true for Google Vertex, but I haven’t dug into it from a compliance perspective before.

> Our own privacy practices also created challenges in investigating reports. Our internal privacy and security controls limit how and when engineers can access user interactions with Claude, in particular when those interactions are not reported to us as feedback.

Ok makes sense and glad to hear

> It remains particularly helpful for users to continue to send us their feedback directly. You can use the /bug command in Claude Code

Ok, makes sense. I’d expect that a human can then see the context in that case, although I hope that is still made very explicit to the end user (I’m not a Claude Code user, so I can’t comment).

> or you can use the "thumbs down" button in the Claude apps to do so

This is pretty concerning. I can’t imagine the average person equates hitting this button with forfeiting their privacy.

l1n · 3 months ago
(Anthropic employee, speaking in a personal capacity)

> I’m pretty surprised that Anthropic can directly impact the infra for AWS Bedrock as this article suggests.

We don't directly manage AWS Bedrock deployments today, those are managed by AWS.

> I can’t imagine the average person equates hitting this button with forfeiting their privacy.

We specify

> Submitting this report will send the entire current conversation to Anthropic for future improvements to our models.

in the thumbs down modal. Is there a straightforward way to improve this copy?

crazygringo · 3 months ago
Sounds fine to me. I'm assuming it wasn't obvious to readers that a confirmation message appears when thumbs down is clicked.
HoyaSaxa · 3 months ago
> We don't directly manage AWS Bedrock deployments today, those are managed by AWS.

That was my understanding before this article. But the article is pretty clear that these were "infrastructure bugs" and the one related to AWS Bedrock specifically says it was because "requests were misrouted to servers". If Anthropic doesn't manage the AWS Bedrock deployments, how could it be impacting the load balancer?

pluto_modadic · 3 months ago
"have a human take a look at this conversation (from {time} to {time})"
_da_ · 3 months ago
> This is pretty concerning. I can’t imagine the average person equates hitting this button with forfeiting their privacy.

When you click "thumbs down" you get the message "Submitting this report will send the entire current conversation to Anthropic for future improvements to our models." before you submit the report, I'd consider that pretty explicit.

HoyaSaxa · 3 months ago
Great to hear. I'm not a Claude user and the article did not make it seem that way.
am17an · 3 months ago
They must really be having a bad time if Anthropic, of all labs, is willing to share their infra details. On the actual precision bug, it is quite unfortunate on the FMA side; numerical issues are often deeply bewildering, and no AI can solve them (yet). It also goes to show that if you are in a super crunch situation like this one (a competitor literally eating your lunch every day), you need humans to understand what went wrong, and even then it can take weeks to rectify.
cyanf · 3 months ago
> On August 29, a routine load balancing change unintentionally increased the number of short-context requests routed to the 1M context servers. At the worst impacted hour on August 31, 16% of Sonnet 4 requests were affected.

Interesting, this implies that the 1M context servers perform worse at low context. Perhaps this is due to some KV cache compression, eviction, or sparse attention scheme being applied on these 1M context servers?

kiratp · 3 months ago
This is due to RoPE scaling.

> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. It is also recommended to modify the factor as needed. For example, if the typical context length for your application is 524,288 tokens, it would be better to set factor as 2.0.

https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking
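
For illustration, the static YaRN setup the Qwen card describes looks roughly like this in the Hugging Face rope_scaling convention; the factor follows the quoted ~524k example, and the original_max_position_embeddings value is an assumption, so check the model card for real values:

    rope_scaling = {
        "rope_type": "yarn",
        "factor": 2.0,  # from the quoted example: ~524k typical context
        "original_max_position_embeddings": 262_144,  # assumed native context length
    }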

FossQuestion · 3 months ago
The key issue is that their post-mortem never explained what went wrong on two out of three issues.

All I know is that my requests can now travel along three completely different code paths, each on its own stack and tuned differently. Those optimizations can flip overnight, independent of any model-version bump—so whatever worked yesterday may already be broken today.

I really don't get the praise they are getting for this postmortem; it only made me more annoyed.

data-ottawa · 3 months ago
With all due respect to the Anthropic team, I think the Claude status page[1] warrants an internal code red for quality. There were 50 incidents in July, 40 incidents in August, and 21 so far in September. I have worked in places where we started approaching half these numbers and they always resulted in a hard pivot to focusing on uptime and quality.

Despite this I'm still a paying customer because Claude is a fantastic product and I get a lot of value from it. After trying the API it became a no brainer to buy a 20x Max membership. The amount of stuff I've gotten done with Claude has been awesome.

The last several weeks have strongly made me question my subscription. I appreciate the openness of this post, but as a customer I'm not happy.

I don't trust that these issues are all discovered and resolved yet, especially the load balancing ones. At least anecdotally, I notice that around 12 ET (9 AM Pacific) my Claude Code sessions noticeably drop in quality. Again, I hope the team is able to continue finding and fixing these issues. Even running local models on my own machine at home, I run into complicated bugs all the time; I won't pretend these are easy problems. They are difficult to find and fix.

[1] https://status.anthropic.com/history

ruszki · 3 months ago
I don’t know whether they are better or worse than others. One thing is for sure: a lot of companies lie on their status pages. I frequently encounter outages that are not reported on their status pages. Nowadays, I’m more surprised when they self-report some problems. Personally, I haven’t had serious problems with Claude so far, but it’s possible that I was just lucky. From my perspective, it just seems that they are reporting outages more faithfully. But that could be completely coincidental.
martinald · 3 months ago
What makes it even worse is the status page doesn't capture all smaller incidents. This is the same for all providers. If they actually provided real time graphs of token latency, failed requests, token/s etc I think they'd be pretty horrific.

If you trust this OpenRouter data the uptime record of these APIs is... not good to say the least: https://openrouter.ai/openai/gpt-5/uptime

It's clear to me that every provider is having enormous scale challenges. Claude Code often slows to a crawl and I have to interrupt it and tell it to try again.

This is especially pronounced around 4-6pm UK time (when we have Europe, Eastern US and West Coast US all hammering it).

Even today I was getting 503 errors from Gemini AI studio with model overloaded at that time, nothing on status page.

I really wonder if it would be worth Claude et al offering a cheaper off-peak plan, to try and level out demand. Perhaps the optics of that don't look good though.

Edit to add: I think another potential dimension to this is that GB200s have been a lot slower to come on stream than the industry probably expected. There have been a lot of defects with various hardware and software components, and I suspect the liquid cooling has been difficult to get right (with far more catastrophic failure states!).

Maxious · 3 months ago
Artificial Analysis also monitors LLM provider APIs independently, "based on 8 measurements each day at different times". You can see the degradation as Opus 4.1 came online: https://artificialanalysis.ai/providers/anthropic#end-to-end...
l1n · 3 months ago
> Claude et al offering a cheaper off peak plan

We do offer Batch Processing today - https://docs.claude.com/en/docs/build-with-claude/batch-proc...
willsmith72 · 3 months ago
> Despite this I'm still a paying customer because Claude is a fantastic product and I get a lot of value from it.

Doesn't that say it all? At this point the quality of the AI trumps reliability for the customer (you and me), so even though of course they should (and I'm sure will) focus on it, why would they prioritise reliability over model quality right now?

edoceo · 3 months ago
The up-thread complaint is that quality drops, and draws a line to reliability. They (Anthropic) have two hard problems to solve.
lumost · 3 months ago
I've become extremely nervous about these sudden declines in quality. Thankfully I don't have a production product using AI (yet), but in my own development experience - the model becoming dramatically dumber suddenly is very difficult to work around.

At this point, I'd be surprised if the different vendors on OpenRouter weren't abusing their trust by silently dropping context, changing quantization levels, reducing experts, or other mischievous means of delivering the same model at lower compute.

martinald · 3 months ago
Openrouter is aware this is happening and flags it now on the UI. It's a real problem.
renewiltord · 3 months ago
This is why you should always put as few incidents on the status page as possible. People's opinion will drop and then the negative effect will fade over time. But if you have a status page, it's incontrovertible proof. Better to lie. They'll forget.

e.g. S3 has encountered increased error rates many times but doesn't report them. No one says anything about S3.

People will say many things, but their behaviour is to reward the lie. Every growth hack startup guy knows this already.

lanstin · 3 months ago
S3 autoscales, so any time the load increases you can see 5xx and 429 errors, but it flexes up in a few hours. That's not exactly an incident; it's sort of Works as Designed.

The first time you write a multithreaded utility to do something in an account with S3 you will see this, and have to write the code to temporarily back off and retry.
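
Something along these lines, as a generic sketch (real code would catch only the throttling/5xx errors surfaced by the client library, not every exception):

    import random
    import time

    def with_backoff(call, max_attempts=6, base_delay=0.5):
        for attempt in range(max_attempts):
            try:
                return call()
            except Exception:  # in real code, catch only 429/5xx-style errors
                if attempt == max_attempts - 1:
                    raise
                # exponential backoff with jitter before retrying
                time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))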

pnutjam · 3 months ago
Yup, these guys aren't the customers anyway. The investors are the only ones they care about because the customers don't come close to paying the actual costs.
data-ottawa · 3 months ago
It’s good they update the status page, but the issues are noticeable without it.
Wowfunhappy · 3 months ago
> On August 25, we deployed a misconfiguration to the Claude API TPU servers that caused an error during token generation. An issue caused by a runtime performance optimization occasionally assigned a high probability to tokens that should rarely be produced given the context, for example producing Thai or Chinese characters in response to English prompts, or producing obvious syntax errors in code. A small subset of users that asked a question in English might have seen "สวัสดี" in the middle of the response, for example.

Can anyone explain to a layperson how this sort of thing is even possible for an LLM?

For normal code, of course stupid bugs happen all the time. You accidentally introduce an off-by-one error in a conditional, for example, or add an extra `goto fail`.

But LLMs aren't written by humans! Models are trained by automated programs over a period of many months across unfathomably massive data centers.

How would a human introduce a bug like the one described in TFA?

sailingparrot · 3 months ago
LLMs are still executed by code written by humans. In this case, the model ultimately gives you a probability distribution over each of the (~200k) tokens in the vocabulary. It's then up to you to decide how you want to sample the next token: you could, for example, always sample the most likely one, or, to make the output more creative, sample randomly from the top-k tokens. This top-k sampling, to make it efficient, is written in XLA and compiled to run directly as a kernel. There was a bug in that kernel, which presumably led to tokens outside of the top-k window being selected from time to time.
Centigonal · 3 months ago
LLMs produce a probability distribution for what the next token might be. How you pick the actual word that is printed next from that probability distribution is by using a sampling approach[1]. If your sampling approach is "select the next word randomly from among the top 4 possibilities" and you flip a > sign, you could end up with the behavior described in the OP.

[1] Here is an example of two common approaches: https://www.reddit.com/r/AIDungeon/comments/1eppgyq/can_some...
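
A toy illustration of that failure mode (nothing to do with the actual XLA kernel): a top-k filter where one flipped comparison keeps the k least likely tokens instead of the k most likely:

    import random

    def sample_top_k(probs, k, rng, buggy=False):
        # Sort token indices by probability; the buggy path sorts ascending,
        # so the "top k" window ends up holding the rarest tokens instead.
        order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=not buggy)
        kept = order[:k]
        weights = [probs[i] for i in kept]
        return rng.choices(kept, weights=weights, k=1)[0]

    rng = random.Random(0)
    probs = [0.90, 0.05, 0.03, 0.01, 0.01]           # token 0 should dominate
    print(sample_top_k(probs, 2, rng))               # almost always 0
    print(sample_top_k(probs, 2, rng, buggy=True))   # only ever 3 or 4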

jjmarr · 3 months ago
The next word can also be selected with weighted randomization and "temperature" is used to control how much weight lower probability tokens get.

I've honestly received the best results in creative writing by ignoring top_k/top_p and simply tuning temperature. Restricting my output to only common words causes everything to feel generic. But Deepseek constantly breaks into Chinese/gibberish/ZALGO! when I go to 1.14.

This isn't related to the "recent issues" but I feel like it's useful advice for anyone trying out AI story creation.
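
To make the temperature mechanics concrete, here's a small sketch with made-up logits: dividing by a temperature above 1 flattens the distribution, which is exactly how tail tokens gain enough mass to produce gibberish:

    import math

    def softmax_with_temperature(logits, temperature):
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [5.0, 2.0, 0.5]                         # made-up values
    print(softmax_with_temperature(logits, 0.7))     # sharper: top token dominates
    print(softmax_with_temperature(logits, 1.14))    # flatter: tail gains probability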

ashdksnndck · 3 months ago
There are many layers of human-written code in between you and the weights.
blackqueeriroh · 3 months ago
Simple answer: there are two separate processes here, training and inference.

As you discuss, training happens over a long period of time in a (mostly) hands-off fashion once it starts.

But inference? That’s a separate process which uses the trained model to generate responses, and it’s a runtime process - send a prompt, inference runs, response comes back. That’s a whole separate software stack, and one that is constantly being updated to improve performance.

It’s in the inference process where these issues were produced.

jldugger · 3 months ago
The AI kernels are floating point, so it's possible to do some unintuitive math that ends up negative even though it wouldn't be in the Real domain. I wouldn't be surprised if checking for overflow state is disabled for perf reasons and the negative simply becomes really big -- like asking for the -1st item in an array and getting the last.
zer00eyz · 3 months ago
If you are going to run a non-deterministic system on three very different hardware platforms, doesn't it behoove you to tell your users where their experience is coming from?

Calling the platforms A, B and C might help provide us the insight we're missing to spot incongruous behaviors faster than trying to aggregate more generalized feedback.

vlovich123 · 3 months ago
Figuring out how to make their LLM serving deterministic might help them track this down. There was a recent paper about how the received wisdom attributing non-determinism to floating-point associativity actually overlooked the real reason for it [1].

[1] https://thinkingmachines.ai/blog/defeating-nondeterminism-in...

ants_everywhere · 3 months ago
network traffic and machine load aren't deterministic. I think for the near term, getting full determinism (e.g. for auditing) is going to only be feasible for batch jobs that are not cost sensitive.

A google search isn't deterministic. Neither is loading upvote count on social media.

It's common advice in distributed systems to have a graceful degradation state instead of becoming unavailable. That wouldn't be possible in a system that's completely deterministic.

bangaladore · 3 months ago
Same input -> Model -> Same output

My understanding is non-determinism is usually due to explicitly introducing randomness while sampling. No reason why they couldn't use a static seed.

And if not, that's something they should solve; unintentional non-determinism is a bug, not a feature.

vlovich123 · 3 months ago
Network traffic and machine load don’t usually impact the output of a pure (in the CS sense of purity) math function (which is what an LLM is) unless you’ve written your system to be sensitive to that.

> to have a graceful degradation state instead of becoming unavailable. That wouldn't be possible in a system that's completely deterministic.

What does this even mean? I see no incompatibility between determinism and your ability to perform the same function more slowly. Determinism just means that the output of the system is solely dependent on the inputs - feed the same inputs get the same outputs. If by degraded state you’re intentionally choosing to change your inputs, that doesn’t change the determinism of your system.

When it is said that LLMs aren't deterministic, it's because the output token depends not only on your input context but also on all the other contexts processed in the same batch, since the kernels are written non-deterministically. If the kernels were written deterministically (so that the output only depended on your input context), then there wouldn't be a problem, and it also wouldn't change the ability of the system to degrade; it would be deterministic because capturing the input context and random seed would be sufficient. As it stands, you'd have to capture the interim states of the other inputs being processed in the same batch, and that interim-state problem is what makes it non-deterministic.

As for Google search, it’s not clear to me it’s non-deterministic. When you Google the exact same thing twice you get exactly the same page of results and selected snippets. That suggests there’s more determinism in the system than you’re giving it credit for.

mmaunder · 3 months ago
Enforcing determinism has a big impact on performance. Which leaves using another model to essentially IQ-test their models, with reporting and alerting.