FrasiertheLion · 10 days ago
AI has normalized single 9's of availability, even for non-AI companies such as GitHub that have to rapidly adapt to AI-aided scaleups in usage patterns. Understandably so, because GPU capacity is pre-allocated months to years in advance, in large discrete chunks, to either inference or training, with a modest buffer that exists mainly so you can cannibalize experimental research jobs during spikes. It's just not financially viable to keep spades of reserve capacity, especially these days, when supply chains are already under great strain and we're starting to be bottlenecked on chip production. And if they got around it by serving a quantized or otherwise ablated model (a common strategy in some instances), all the new people would be disappointed and it would damage trust.

Fewer 9's are a reasonable tradeoff for the ability to ship AI to everyone, I suppose. That's one way to prove the technology isn't reliable enough to be shipped into autonomous kill chains just yet lol.

TacticalCoder · 10 days ago
> AI has normalized single 9's of availability, ...

FWIW I use AI daily to help me code...

And apparently the output of LLMs is normalizing single 9's too, which may or may not be sufficient.

From all the security SNAFUs, performance issues, the gigantic amount of kitchen-sinky boilerplate generated (which will require maintenance, and that has always been the killer), and now uptime issues, I realize we all need to use more of our brains, not less, to use these AI tools. And that's not even counting the times when the generated code simply doesn't do what it should.

For a start, if you don't know jack shit about infra, it looks like you're already in for a whole world of hurt: when that agent rm -rf's your entire Git repo and FUBARs your OS because you had no idea how to compartmentalize it, you'll feel bad. Same once all your secrets get publicly exposed.

It looks like now you won't just need a strong foundation in coding: you'll also need to be at ease with the entire stack. Learning to be a "prompt engineer" definitely sounds like the very easy part. Trivial, even.
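The compartmentalization point deserves emphasis: an agent should never operate on your real checkout or see your real environment. A minimal sketch of that idea in Python (the helper name and the secret-filtering heuristic are illustrative assumptions, not any particular tool's API; real setups would add container or network isolation on top):

```python
import os
import shutil
import subprocess
import tempfile

# Heuristic: env var names that look like credentials (illustrative, not exhaustive)
SECRET_MARKERS = ("KEY", "TOKEN", "SECRET", "PASSWORD")

def run_agent_command(cmd, repo_path, timeout=300):
    """Run an untrusted tool command against a throwaway copy of the repo,
    with secret-looking environment variables stripped out."""
    scratch = tempfile.mkdtemp(prefix="agent-sandbox-")
    work = os.path.join(scratch, "repo")
    # Even an `rm -rf` inside `work` leaves the real checkout untouched
    shutil.copytree(repo_path, work)
    env = {k: v for k, v in os.environ.items()
           if not any(m in k.upper() for m in SECRET_MARKERS)}
    result = subprocess.run(cmd, cwd=work, env=env, capture_output=True,
                            text=True, timeout=timeout)
    return work, result
```

This only addresses the two failure modes above (clobbered repo, leaked secrets); it is a sketch of the principle, not a substitute for proper sandboxing.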

direwolf20 · 10 days ago
That's supposing the autonomous kill chain needs more than one 9. There are wars going on right now with less than 20% targeting accuracy.
mrbombastic · 10 days ago
Are we going to do the same "everything is binary" engineer thing with bombs and innocent casualties that we did with self-driving? There is also an accountability crisis that will unfold if we loose these things on the world. It is not just "one metric is better than human operators, therefore take your hands off the wheel and hope for the best". Please file a ticket with support if your child's school was accidentally destroyed.
gaigalas · 10 days ago
"It's fine, everyone does it"
KronisLV · 10 days ago
There's probably a curve of diminishing returns when it comes to how much effort you throw in to improve uptime, which also directly affects the degree of overengineering around it.

I'm not saying that it should excuse straight-up bad engineering practices, but I'd rather have them iterate on the core product (and maybe even make their Electron app more usable: switching conversations shouldn't take 2-4 seconds when those could be stored locally, and there should be at least a bare-minimum indicator that something is happening, instead of "Let me write a plan" and then nothing to distinguish progress from a silently dropped connection) than pursue near-perfect uptime.

Sorry about the usability rant, but my point is that I'd expect medical systems and planes to have amazing uptime, whereas for most other, lower-stakes things I wouldn't be so demanding. The context I haven't mentioned so far is that I've seen whole systems get developed poorly because the team overengineered the architecture and crippled their ability to iterate, sometimes anticipating scale when a simpler but better-developed architecture would have sufficed!

Ofc there's a difference between sometimes having to wait in a queue for a request to be serviced, or having a few requests get dropped here and there and needing to retry them, vs your system having a cascading failure that it can't automatically recover from and that brings it down for hours. Not having enough cards feels like it should result in the former, not the latter.

thekid314 · 10 days ago
Yeah, the influx of people is disrupting my work, but it brings me joy to witness OpenAI’s decline in consumer support. So much for their Jony Ive product, whatever it was.
camillomiller · 10 days ago
I am so baffled that someone with the stature of Jony Ive fell prey to scam Altman's empty promises. I would have expected much more of him.
skywhopper · 10 days ago
Seriously? Jony Ive is in his Cash In era. He long ago stopped being relevant, and was a huge drag on Apple for a decade. He’s perfectly happy to take billions for doing nothing, I’m sure.
rhubarbtree · 10 days ago
What were the empty promises?
chihuahua · 10 days ago
Altman put all of his attribute points on lying.
adithyassekhar · 10 days ago
Are employees from Anthropic botting this post now? This should be one of the most upvoted posts on this site, but it's nowhere in the first 3 pages.

Also remember, using claude to code might make the company you're working for richer. But you are forgetting your skills (seen it first hand), and you're not learning anything new. Professionally you are downgrading. Your next interview won't be testing your AI skills.

raincole · 10 days ago
> Your next interview won't be testing your AI skills.

You are living under quite a big rock.

loevborg · 10 days ago
Literally every interview I've done recently has included the question: "What's your stance on AI coding tools?" And there's clearly a right and wrong answer.
bakugo · 10 days ago
If you can only code with AI, soon you won't have interviews at all because there's no reason to hire you, as the managers can just type the prompts themselves. Or at least that's what I've been led to believe by the marketing.
brunooliv · 10 days ago
What rock?

C'mon, let's be real here: there's a difference between "testing AI skills" and "using AI agents like you would on the daily".

The signal you get from leetcode is already dubious for asserting proficiency; it's mostly used as a filter for "are you willing to cram useless knowledge and write code under pressure to get the job?", just like system design is. You won't be doing any system design for "scale" anywhere in big tech because you have architects for that, nor do you need to "know" anything; it's mostly gatekeeping. But the truth is, LLMs democratized both leetcode and system design anyway. Anyone with the right prompting skills can now get to an output that's good for 99% of the cases, and the other 1% are reserved for architects/staff engineers to "design" for you.

The crux of the matter is that companies do not want to shift how they approach interviews for the new era, because we have collectively decided that the current process is good enough as-is. Again, I'd argue this is questionable given how these services sometimes break with every new product launch or "under load" (where YO SYSTEM DESIGN SKILLZ AT).

adithyassekhar · 10 days ago
I wish I could edit that; Read: ..AI skills alone.
vidarh · 10 days ago
If you're not learning anything new, you're doing it wrong.

There's a massive gap between just using an LLM and using it optimally, e.g. with a proper harness, customised to your workflows, with sub-agents etc.

It's a different skill-set, and if you're going to go into another job that requires manual coding without any AI tools, by all means, then you need to focus on keeping those skills sharp.

Meanwhile, my last interview already did test my AI skills.

polairscience · 10 days ago
Do you have any descriptions or analyses of what counts as "properly" on the cutting edge? I'm very curious. Only part of my profession is coding, but it would be nice to get insight into how people who really try to learn with these tools work.
gck1 · 10 days ago
> Meanwhile, my last interview already did test my AI skills.

Curious to hear more about this.

thepasch · 10 days ago
> But you are forgetting your skills

Depends on what you consider your "skills". You can always relearn syntax, but you're certainly not going to forget your experience building architectures and developing a maintainable codebase. LLMs only do the what for you, not the why (or you're using it wrong).

adithyassekhar · 10 days ago
There are three sides to this depending on when you started working in this field.

For the people who started before the LLM craze, they won't lose their skills if they just keep focusing on their original roles. But the truth is people are being assigned more than their original roles at most companies: backend developers being tasked with frontend, devops, and QA roles, with the people who held those roles being let go. This is happening right now. https://www.reddit.com/r/developersIndia/comments/1rinv3z/ju... When that happens, they don't care, or don't have the mental capacity to care, about a codebase in a language they've never worked in before. People here talk about guiding the LLMs, but at most places they are too exhausted to carry that context and just let Claude review its own code.

For the people who are starting right now, they're discouraged from all sides from writing code themselves. They'll never understand why an architecture is designed a certain way. Sure, ask the LLM to explain, but that's like learning to swim by reading a book. They have to blindly trust the code and keep hitting it like a casino machine (forgot the name, excuse me), burning tokens, which makes these companies more money.

For the people who are yet to begin, sorry for having to start in a world where a few companies hold everyone's skills hostage.

the_bigfatpanda · 10 days ago
The syntax argument is correct, but from what I am seeing, people _are_ using it wrong, i.e. they have started offloading most of their problem solving to the LLM first: not just using it to maybe refine their ideas, but starting there.

That is a very real concern. I've had to chase engineers to ensure that they are not blindly accepting everything the LLM says, encouraging them to first form some sense of what the solution could be and only then use the LLM to refine it further.

As more and more thinking is offloaded to LLMs, people lose their gut instinct about how their systems are designed.

AlexeyBelov · 10 days ago
> Your next interview won't be testing your AI skills

Not that I disagree with your overall point, but have you interviewed recently? 90% of the companies I interacted with required (!) AI skills, and wanted me to tell them how exactly I "leverage" them to increase my productivity.

adithyassekhar · 10 days ago
Are they just looking for AI skills? If so that's terrifying.
PacificSpecific · 10 days ago
I've done a couple flirty interviews and so far it hasn't come up. So take hope, it's not all bad.
nDRDY · 10 days ago
What a time to be alive. I once got roasted in an interview because I said I would use Google if I didn't know something (in this context, the answer to a question that would easily be found in language and compiler documentation).
skeledrew · 10 days ago
> not learning anything new

Huge disagree. Or more likely, "depends on how you use it". I've learned a lot since I started using AI to help with my projects, as I prompt it in such a way that if I'm going about something the "wrong" way, it'll tell me and suggest a better approach. Or just generally help fill out my knowledge whenever I'm vague in my planning.

gck1 · 10 days ago
> But you are forgetting your skills (seen it first hand), and you're not learning anything new.

This is just false. I may forget how to write code by hand, but I'm playing with things I never imagined I would have time and ability to, and getting engineering experience that 15 years of hands on engineering couldn't give me.

> Your next interview won't be testing your AI skills.

Which will be a very good signal to me that it's not a good match. If my next interview is leetcode-style, I will fail catastrophically, but then again, I no longer have any desire to be a code writer - AI does it better than me. I want to be a problem solver.

adithyassekhar · 10 days ago
> getting engineering experience that 15 years of hands on engineering couldn't give me.

This is the equivalent of how watching someone climb Mount Everest on a TV show or YouTube makes you feel like you did it too. You never did; your brain got the feeling that you did, and it'll never motivate you to do it yourself.

mihaaly · 10 days ago
> Professionally you are downgrading

It is the contrary!

You learn to use a very powerful tool. It is a tool, like a text editor or a compiler.

But you focus more on the logic and function, instead of syntax details and the whims of the computer languages used in concert.

The analogy from construction is being elevated from bricklayer to engineer. Or using various shaped shovels and a wheelbarrow versus mechanized tools like excavators and dumpers for earthworks.

... of course, for some the focus is on being a master bricklayer, which is noble (no pun intended; said with a straight, agreeing face: bricklaying is a fine skill with beautiful outputs in its area of use). For those people AI is really unnecessary. An existential threat, but unnecessary.

adithyassekhar · 10 days ago
I agree with you that syntax details are not important, but they haven't been important for a long time thanks to better editors and linters.

> But you focus on the logic and function more instead of syntax details and whims of the computer languages used in concert.

This is exactly my point. I learned about logical mistakes when my first if-else broke. The only reason you or I can guide these models toward good logic is that we dealt with bad logic before all this. I use Claude myself a lot because it saves me time. But we're building a culture where no one ever reads the code; instead we're building black boxes.

Again, you could see it as the next step in abstraction, but not when everyone is this dependent on a few companies prepared to strip the world of its skills so they can sell them back to it.

upmind · 10 days ago
Jarred (from Bun) said that a lot of the errors are because of how much they've scaled in users recently (i.e., the flock that came from OpenAI).
andreagrandi · 10 days ago
I must have missed something: why are people moving from OpenAI? Since they released gpt-5.3-codex I've been using it alongside Claude with opus-4.6, and Codex has always been better, more accurate, and less prone to hallucinations. I can do more with a $20 OpenAI plan than with a $100 Claude Max.
andkenneth · 10 days ago
People are mad at OpenAI for cooperating with the Pentagon, while Anthropic put their foot down over their red lines.
ternwer · 10 days ago
HN often avoids politics, but these were some of the most upvoted stories recently:

https://news.ycombinator.com/item?id=47188697

https://news.ycombinator.com/item?id=47189650

rhubarbtree · 10 days ago
I use both OpenAI and Claude, and in the last few months I have moved exclusively to Claude, as it's better.
CSMastermind · 10 days ago
Politics. Agreed, Codex performs significantly better for me.
fred_is_fred · 10 days ago
The first scaling event was after their highly successful Super Bowl ad and the second was being on the right side of history over the weekend.
dilyevsky · 10 days ago
this has been an issue for years at this point... other labs are hardly any better tho
anonnona8878 · 10 days ago
It keeps going down. One more time and I'm moving to Codex. Or hell, better yet, I'll go back to using my actual brain and coding, god forbid. Fml.
lambda · 10 days ago
Please relearn to use your brain.

I cannot imagine how you can properly supervise an LLM agent if you can't effectively do the work yourself, maybe slightly slower. If the agent is going a significant amount faster than you could do it, you're probably not actually supervising it, and all kinds of weird crap could sneak in.

Like, I can see how it can be a bit quicker for generating some boilerplate, or for iterating on some uninteresting API weirdness that's tedious to do by hand. But if you're fundamentally going that much faster with the agent than by hand, you're not properly supervising it.

So yeah, just go back to coding by hand. You should be doing that probably ~20% of the time anyhow, just to keep in practice.

winwang · 10 days ago
Kind of agreed. I like vibe coding as "just" another tool. It's nice to review code in the IDE (well, VSCode), make changes without fully refactoring, and have the AI "autocomplete". Interestingly, it's sometimes way faster and easier to refactor by hand because of IDE tooling.

The ways that agents actually make me "faster" are typically:

1. more fun to slog through tedious/annoying parts
2. fast code-review iterations
3. parallel agents

tvink · 10 days ago
You'll be back :)
evara-ai · 10 days ago
This is a real operational problem when you're building client-facing automation systems on top of these APIs. I build chatbots, workflow automation, and AI agent systems for clients — and the hardest conversation is explaining that your system's uptime is fundamentally capped by your LLM provider's uptime.

Patterns that have helped in production:

1. Multi-provider fallback. For conversational systems, route to Claude by default, fall back to GPT-4 on 5xx errors. The response quality difference is usually acceptable for the 2-3% of requests that hit the fallback. This turns a hard outage into a slight quality degradation.

2. Async queuing for non-real-time workflows. If you're processing documents, generating reports, or running batch analysis — don't call the API synchronously. Queue the work, retry with exponential backoff, and let the system self-heal when the API recovers. Most of our automation pipelines run with a 15-minute SLA, not a 500ms one.

3. Graceful degradation in real-time systems. For chatbots and voice agents, have a scripted fallback path. "I'm having trouble processing that right now — let me transfer you to a human" is infinitely better than a hung connection or error message.

The broader issue: we're all building on infrastructure where "four nines" isn't even on the roadmap yet. That's fine if you architect for it — treat LLM APIs like any other unreliable external dependency, not like a database query.
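Patterns 1 and 2 combine naturally: retry transient failures with backoff, then degrade to the next provider, then to a scripted reply. A minimal sketch, assuming `ProviderError` and the provider callables are hypothetical stand-ins for whatever SDK errors and clients you actually use:

```python
import random
import time

class ProviderError(Exception):
    """Stand-in for a 5xx or transport-level failure from an LLM API."""

def call_with_fallback(prompt, providers, max_retries=3, base_delay=0.1):
    """Try each provider in order; retry transient failures with
    exponential backoff plus jitter before degrading to the next one."""
    for call in providers:
        for attempt in range(max_retries):
            try:
                return call(prompt)
            except ProviderError:
                # 0.1s, 0.2s, 0.4s... plus jitter to avoid thundering herds
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    # Every provider is down: scripted degradation, not a hung connection
    return "I'm having trouble processing that right now."
```

In a real system the final branch would also log and alert, but the design point stands: a hard outage becomes a quality or latency degradation instead of an error page.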

pmontra · 10 days ago
Emails with verification codes do not get delivered.

> Have a verification code instead?

> Enter the code generated from the link sent to [...]

> We are experiencing delivery issues with some email providers and are working to resolve this.

> Check your junk/spam and quarantine folders and ensure that support@mail.anthropic.com is on your allowed senders list.

I'm still waiting for a code from one hour ago. Meanwhile I managed to fix my source code on my own, like I did twelve months ago.

ruszki · 10 days ago
> I managed to fix my source code alone, like twelve months ago.

I’ve just mentioned to one of my friend yesterday, that you cannot do this anymore properly with new things. I’ve started a new project with some few years old Android libraries, and if I encounter a problem, then there is a high chance that there is nothing about it on the public internet anymore. And yesterday I suffered greatly because of this. I tried to fix a problem, I had a clearly suboptimal solution from myself after several hours, but I hated it, but I couldn’t find any good information about it (multi library AndroidManifest merging in case of instrumented tests). Then I hit Claude Code with a clear example where it fails. It solved it, perfectly. Then I asked in a separate session how this merging works, and why its own solution works. It answered well, then I asked for sources, and it cannot provide me anything. I tried Google and Kagi, and I couldn’t find anything. Even after I knew the solution. The information existed only hidden from the public (or rather deep in AGP’s source code), in the LLM. And I’m quite sure that I wasn’t the only one who had this problem before, yet there is no proper example to solve this on the internet at all, or even anything to suggests how the merging works. The existing information is about a completely separate procedure without instrumented tests.

So, you cannot be sure anymore, that you can solve it by yourself. Because people don’t share that much anymore. Just look at StackOverflow.

mejutoco · 10 days ago
On the other hand, it looks like you could write that blog post and get some traffic. Very interesting how the flow has changed direction, based on your example.
johndough · 10 days ago
Anthropic has never been able to send emails to my outlook email address (since 2023). Maybe changing your email address helps.
freely0085 · 6 days ago
They don't allow changing your account email address.
freely0085 · 7 days ago
Facing the issue too for the last two days.
tayo42 · 10 days ago
Who fixes the AI when the AI is down? Semi-serious, since they're pretty big on not writing code themselves.
kube-system · 10 days ago
The same guy who used to fix stack overflow, presumably
zvqcMMV6Zcr · 10 days ago
Maybe the network folks can give some hints? I guess they encounter this kind of issue relatively often, when they can't access network equipment over the network in order to fix a network issue. I know management consoles have separate networks at datacenter scale, but it isn't that easy with even bigger networks.
raincole · 10 days ago
I know you say "semi serious" but you can't seriously think there isn't an LLM for internal usage only in Anthropic, right.
tayo42 · 10 days ago
I'm not sure what's involved in serving these LLMs, or whether the infra for an internal one could be completely separate or not.
brookst · 10 days ago
Most ops fixes don’t involve writing code though.