cobolcomesback · 6 days ago
This “mandatory meeting” is just the usual weekly company-wide meeting where recent operational issues are discussed. There was a big operational issue last week, so of course this week will have more attendance and discussion.

This meeting happens literally every week, and has for years. Feels like the media is making a mountain out of a molehill here.

davidclark · 6 days ago
The article claims:

>He asked staff to attend the meeting, which is normally optional.

Is that false? It also discusses a new policy:

>Junior and mid-level engineers will now require more senior engineers to sign off any AI-assisted changes, Treadwell added.

Is that inaccurate? It is good context that this is a regularly scheduled meeting. But, regularly scheduled meetings can have newsworthy things happen at them.

djb_hackernews · 5 days ago
When an SVP asks you to do something in a mass email, it's very much optional. Dave Treadwell is an SVP; his org is likely in the tens of thousands, and there is no way to even hold a mandatory meeting for that many people.

My SVP asks me to do things all the time, indirectly. I do probably 5% of them.

skeeter2020 · 6 days ago
That's not really what the headline attempts to communicate though. It specifically emphasizes "Mandatory" and "AI breaking things". Nobody was going to click on "Regularly scheduled Amazon staff meeting will include discussion on operational improvement"
the_arun · 5 days ago
The day isn't far off when my agents will attend meetings, share my opinions, and collect a summary for me. If everyone does the same, agents will run the meetings and share summaries with their parent humans. Each of us will have LLMs/agents with our own contextual data. It's another level of multitasking.
s3p · 5 days ago
>> He asked staff to attend the meeting, which is normally optional.

> Is that false?

Judging from the comment above, no, the meeting happens every week, and this week they were asked to attend.

cobolcomesback · 6 days ago
It’s not false. But it’s also weaselly worded.

Note that the article doesn’t say that he told staff they have to attend the meeting. It says he “asked” staff to attend the meeting. Which again, it’s really really normal for there to be an encouragement of “hey, since we just had an operational event, it would be good to prioritize attending this meeting where we discuss how to avoid operational events”.

As for the second quote: senior engineers have always been required to sign off on changes from junior engineers. There’s nothing new there. And there is nothing specific to AI that was announced.

This entire meeting and message is basically just saying “hey we’ve been getting a little sloppy at following our operational best practices, this is a reminder to be less sloppy”. It’s a massive nothingburger.

CoolGuySteve · 6 days ago
It didn't seem to make the news but at least in NYC the entire Amazon storefront was broken all afternoon on Friday.

Items weren't displaying prices and it was impossible to add anything to your cart. It lasted from about 2pm to 5pm.

It's especially strange because if a computer glitch brought down a large retail competitor like Walmart I probably would have seen something even though their sales volume is lower.

kotaKat · 6 days ago
A little birdie told me someone pushed duplicate data into one of Amazon’s core noSQL systems that runs most of e-commerce. The front end of the site broke in weird ways but it certainly wasn’t taking orders.
malfist · 6 days ago
Over the weekend I was trying to return a pair of shoes and get a different size and I kept getting 500s trying to go to the store page for the shoes.
m3047 · 6 days ago
Sometimes you squeeze clay and it comes out the oddest places. There were other stressors last week: https://www.pcmag.com/news/amazon-cloud-services-disrupted-i...
belval · 6 days ago
I am not in that specific meeting but it made me chuckle that a weekly ops meeting will somehow get media attention. It's been an Amazon thing forever. Wait until the public learns about CoEs!
cmiles74 · 5 days ago
A weekly ops meeting where they talk about ensuring PRs with AI contributions get extra scrutiny? I think that's significant news.
8note · 6 days ago
I'd expect CoEs to be coming up with AI code action items though, not more thorough human checks
groundzeros2015 · 5 days ago
It’s always sobering to see a news story about something you have insider perspective on.
otterley · 6 days ago
> Feels like the media is making a mountain out of a molehill here.

That's been their job ever since cable news was invented.

ses1984 · 6 days ago
It’s been a bit longer than that.

https://en.wikipedia.org/wiki/Yellow_journalism

It probably goes back as long as they have been shouting news in the town square in Rome or before that even.

embedding-shape · 6 days ago
> This meeting happens literally every week, and has for years. Feels like the media is making a mountain out of a molehill here.

Are you completely missing the point of the submission? It's not about "Amazon has a mandatory weekly meeting" but about the contents of that specific meeting, about AI-assisted tooling leading to "trends of incidents", having a "large blast radius" and "best practices and safeguards are not yet fully established".

No one cares how often the meeting in general is held, or if it's mandatory or not.

skeeter2020 · 6 days ago
>> Are you completely missing the point of the submission

no, and that's what people are noting: the headline deliberately tries to blow this up into a big deal. When did you last see an HN post about Amazon's mandatory meeting to discuss a human-caused outage, or a post-mortem? It's not because they don't happen...

furyofantares · 5 days ago
This reply chain is confusing but I'm guessing got merged from another thread that had a different title?

Must have as the comments are hours older than OP.

cmiles8 · 6 days ago
The core message of the article is that Amazon has been having issues with AI slop causing operational reliability concerns, and that seems to be 100% accurate.
coredog64 · 5 days ago
/with AI slop//

rahbert · 5 days ago
This is correct. We ran them on Wednesdays in Alexa. Jassy actually used to come and sit in ours once a quarter or so when he was running AWS.
age1mlclg6 · 5 days ago
What has really happened is that those employees were made into "reverse centaurs":

https://www.theguardian.com/us-news/ng-interactive/2026/jan/...

Clent · 6 days ago
Who is the media you're accusing here? This is a Twitter post. As far as I can tell, they do not work for a media company.

What is worth being pointed out is how quickly people blame "The Media" for how people use, consume and spread information on social networks.

otterley · 6 days ago
The source is not a Twitter post, it's a Financial Times article (that the poster failed to cite).

niwtsol · 6 days ago
I believe it is by group - AWS started the weekly operations meeting, and effectively every service's oncall from the last week had to attend. Then it grew massive, so they made it optional. Alexa had a similar meeting that tried to replicate what AWS did. A lot of time was spent reviewing load tests getting ready for the holiday season, Prime Day, and the Super Bowl (Super Bowl ads used to cause crazy TPS spikes for Alexa). And a lot of finger pointing if there was an outage from one team. While it probably did help raise the operational bar, so much time was wasted by engineers on busywork/paperwork documenting an error or fix vs. improving the actual service.
happytoexplain · 6 days ago
>Junior and mid-level engineers can no longer push AI-assisted code without a senior signing off

Review by a senior is one of the biggest "silver bullet" illusions managers suffer from. For a person (senior or otherwise) to examine code or configuration with the granularity required to verify that it even approximates the result of their own level of experience, even only in terms of security/stability/correctness, requires an amount of time approaching what it would take to just do it themselves.

I.e. senior review is valuable, but it does not make bad code good.

This is one major facet of probably the single biggest problem of the last couple decades in system management: The misunderstanding by management that making something idiot proof means you can now hire idiots (not intended as an insult, just using the terminology of the phrase "idiot proof").

ardeaver · 6 days ago
When I was really early in my career, a mentor told me that code review is not about catching bugs but spreading context (i.e. increasing bus factor.) Catching bugs is a side effect, but unless you have a lot of people review each pull request, it's basically just gambling.

The more expensive and less sexy option is to actually make testing easier (both programmatically and manually), write more tests and more levels of tests, and spend time reducing code complexity. The problem, I think, is people don't get promoted for preventing issues.

VorpalWay · 5 days ago
This depends on the industry. I work on industrial machine control software, and we spend a huge amount of time on tests. We have to for some parts (human-safety critical), but other parts would just be expensive if they failed (loss of income for customers, and possibly damaged equipment).

The key to making this scalable is to make as few parts as possible critical, and to make the potential bad outcomes as benign as possible. (This lets you go to a lower rating in whatever safety standard applies to your industry.) You still need tests for the less critical parts, though: while downtime is better than injury, if you want to sell future machines to your customers you need a good track record. At least if you don't want to compete on cost.

asdfman123 · 5 days ago
One of the major things code review does is prevent that one guy on your team who is sloppy or incompetent from messing up the codebase without singling him out.

If you told someone "I don't trust you, run all code by me first" it wouldn't go well. If you tell them "everyone's code gets reviewed" they're ok with it.

bluGill · 6 days ago
> people don't get promoted for preventing issues.

they do - but only after a company has been burned hard. They can also be promoted for their area being so much better that everyone notices.

Still, the best way to a promotion is to write a major bug that you can come in at the last moment and be the hero for fixing.

kqr · 5 days ago
Code review is great for spreading context, but it is also very good at finding bugs. If you want to find bugs, review is one of the best ways to do it. https://entropicthoughts.com/code-reviews-do-find-bugs
bloppe · 5 days ago
I think of code review more as ensuring understandability. When you spend hours gathering context, designing, iterating, debugging, and finally polishing a commit, your ability to judge the readability of your own change has been tainted by your intimate familiarity with it. Getting a fresh pair of eyes to read it and leave comments like "why did you do it this way" or "please refactor to use XYZ for maintainability" leaves you with something that will be easier to navigate and maintain by the junior interns who will end up fixing your latent bugs 5 years later.
8note · 6 days ago
> The problem, I think, is people don't get promoted for preventing issues.

cleaning up structural issues across a couple of orgs is a senior => principal promo I've seen a couple of times

wiseowise · 5 days ago
> When I was really early in my career, a mentor told me that code review is not about catching bugs but spreading context (i.e. increasing bus factor.) Catching bugs is a side effect

This BS is what I tell my juniors when I want them to fuck off with their reviews and let me focus on my actual work.

Sounds very insightful though.

marginalia_nu · 6 days ago
Expert reviews are just about the only thing that makes AI-generated code viable, though doing them after the fact is a bit sketchy; to be efficient, you kind of need to keep an eye on what the model is doing as it's working.

Unchecked, AI models output code that is as buggy as it is inefficient. In smaller greenfield contexts, it's not so bad, but in a large code base it performs much worse, as it will not have access to the bigger picture.

In my experience, you should be spending something like 5-15X the time the model takes to implement a feature on reviewing it and making it fix its errors and inefficiencies. If you do that (with an expert's eye), the changes will usually be of high quality, correct, and good.

If you do not do that due diligence, the model will produce a staggering amount of low quality code, at a rate that is probably something like 100x what a human could output in a similar timespan. Unchecked, it's like having a small army of the most eager junior devs you can find going completely fucking ape in the codebase.

locusofself · 6 days ago
If you spend 5-15x the time reviewing what the LLM is doing, are you saving any time by using it?
rectang · 6 days ago
> Expert reviews are just about the only thing that makes AI generated code viable

I disagree, in the sense that an engineer who knows how to work with LLMs can produce code which only needs light review.

* Work in small increments

* Explicitly instruct the LLM to make minimal changes

* Think through possible failure modes

* Build in error-checking and validation for those failure modes

* Write tests which exercise all paths

This is a means to produce "viable" code using an LLM without close review. However, to your point, engineers able to execute this plan are likely to be pretty experienced, so it may not be economically viable.
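As a sketch of what "small increments plus tests for every path" can look like in practice (the `parse_port` helper and its failure modes are purely illustrative, not from any real codebase):

```python
def parse_port(value: str) -> int:
    """One small, reviewable increment: parse and validate a TCP port.

    The failure modes (non-numeric input, out-of-range values) were
    enumerated up front, and the code checks each of them explicitly.
    """
    try:
        port = int(value)
    except ValueError:
        raise ValueError(f"not a number: {value!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"out of range: {port}")
    return port

# Tests exercising all paths: happy path, non-numeric, out of range.
assert parse_port("8080") == 8080
for bad in ("abc", "0", "70000"):
    try:
        parse_port(bad)
        raise AssertionError(f"expected rejection of {bad!r}")
    except ValueError:
        pass
```

At this granularity a reviewer can check the change against the failure-mode list in a minute or two, which is what makes the "light review" claim plausible.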

UncleMeat · 5 days ago
Sadly, the way people become expert in a codebase is through coding. The process of coding is the process of learning. If we offload the coding to AI tools we will never be as expert in the codebase, its complexity, its sharp corners, or its unusual requirements. While you can apply general best practices for a code review you can never do as much as if you really got your hands dirty first.

"Seniors will do expert review" will slowly collapse.

jonnycoder · 6 days ago
I tend to agree. I spent a lot of time revising skills for my brownfield repo, writing better prompts to create a plan with clear requirements, writing a skill/command to decompose a plan, having a clear testing skill to write tests and validate, and finally having a code reviewer step using a different model (in my case it's codex since claude did the development). My last PR was as close to perfect as I have got so far.
Skidaddle · 6 days ago
Just lead with “You are an expert software engineer…”, easy!
raw_anon_1111 · 6 days ago
In my experience, inefficient code is rarely the issue outside of data engineering type ETL jobs. It's mostly architectural. Inefficient code isn't the reason your login is taking 30 seconds. Yes, I know that at Amazon/AWS scale (former employee) every efficiency matters. But even at Salesforce scale, wringing out every bit of efficiency doesn't matter.

No one cares about handcrafted artisanal code as long as it meets both functional and non-functional requirements. The sooner geeks get over thinking of themselves as some type of artist, the happier they will be.

I’ve had a job that requires coding for 30 years, and before that I was a hobbyist. I’ve worked at everything from 60-person startups to BigTech.

For my last two projects (consulting) and my current project, I led the project, gathered the requirements, designed the architecture from an empty AWS account (yes, using IaC), and delivered it, and I didn’t look at a line of code. I verified the functional and non-functional requirements, wrote the hand-off documentation, etc.

The customer is happy, my company is happy, and I bet you not a single person will ever look at a line of code I wrote. If they do get a developer to take it over, the developer will be grateful for my detailed AGENTS.md file.

js8 · 6 days ago
> requires an amount of time approaching the time spent if they had just done it themselves

It's actually often harder to fix something sloppy than to write it from scratch. To fix it, you need to hold in your head both the original, the new solution, and calculate the difference, which can be very confusing. The original solution can also anchor your thinking to some approach to the problem, which you wouldn't have if you solve it from scratch.

bluGill · 6 days ago
Sloppy code that has been around for a while works. It likely has support for edge cases you forgot about. Often the sloppiness is because of those edge cases.
ummonk · 5 days ago
In fairness though, it does give you good practice for the essential skill of maintaining / improving an old codebase.
unshavedyak · 5 days ago
> For a person (senior or otherwise) to examine code or configuration with the granularity required to verify that it even approximates the result of their own level of experience, even only in terms of security/stability/correctness, requires an amount of time approaching the time spent if they had just done it themselves.

Hell, often it feels slower/worse. Foreign code is easily confusing at first, which slows you down - and bad code quickly gets bewildering and sends you down paths of clarifications that waste time.

SchemaLoad · 5 days ago
So many times I get AI generated PRs from juniors where I don't feel comfortable with the code, I wouldn't do it like this myself, but I can't strictly find faults that I can reject the PR with. Usually it's just a massive amount of code being generated which is extremely difficult to review, much harder than it was for the submitter to generate and send it for review.

Then often it blows up in production. Makes me almost want to blanket reject PRs for being too difficult to understand. Hand written code almost has an aversion to complexity, you'd search around for existing examples, libraries, reusable components, or just a simpler idea before building something crazy complex. While with AI you can spit out your first idea quickly no matter how complex or flawed the original concept was.

steveBK123 · 6 days ago
Right, code reviews should already have been happening with human written junior code.

If AI is a productivity boost and juniors are going to generate 10x the PRs, do you need 10x the seniors (expensive) or 1/10th the juniors (cost save).

A reminder that in many situations, pure code velocity was never the limiting factor.

Re: idiot proofing, I think this is a natural evolution: as companies get larger, they try to limit their downside and manage for the median, rather than having a growth mindset in hiring/firing/performance.

AgentOrange1234 · 6 days ago
Seniors are going to need to hold Juniors to a high bar for understanding and explaining what they are committing. Otherwise it will become totally soul destroying to have a bunch of juniors submitting piles of nonsense and claiming they are blocked on you all the time.

Deleted Comment

sethops1 · 6 days ago
This was challenging enough pre AI. Now that everybody has an AI slop button, the life of an effective code reviewer just got so much more miserable.
esafak · 5 days ago
Make them first go through an AI reviewer that is informed by the code base's standards.
jetrink · 6 days ago
It could create the right sort of incentives though. If I'm a junior and I suddenly have to take my work to a senior every time I use AI, I'm going to be much more selective about how I use it and much more careful when I do use it. AI is dangerous because it is so frictionless and this is a way to add friction.

Maybe I don't have the correct mental model for how the typical junior engineer thinks though. I never wanted to bug senior people and make demands on their time if I could help it.

devonbleak · 6 days ago
What you're actually going to see is seniors inundated by slop and burning out and quitting because what used to be enjoyable solving of problems has become wading through slop that took 10 minutes to generate and submit but 30+ minutes to understand and write up a critique for it.
onion2k · 6 days ago
I.e. senior review is valuable, but it does not make bad code good.

I suspect that isn't the goal.

Review by more senior people shifts accountability from the Junior to a Senior, and reframes the problem from "Oh dear, the junior broke everything because they didn't know any better" to "Ah, that Senior is underperforming because they approved code that broke everything."

bs7280 · 6 days ago
This is also why I think we will enter a world without Jrs. The time it takes a Senior to review a Jr's AI code costs more than having the Sr produce their own AI code from scratch. Factor in the lack of meetings on a Sr-only team, and the productivity gains will appear to be massive.

Whether or not these productivity gains are realized is another question, but spreadsheet based decision makers are going to try.

czscout · 6 days ago
In this scenario, how might one become a senior without first being a junior? Seniors just pop into existence?
hintymad · 5 days ago
> Review by a senior is one of the biggest "silver bullet" illusions managers suffer from

Especially in a big co like Amazon, most senior engineers are box drawers, meeting goers, gatekeepers, vision setters, org lubricants, VPs' trustees, glorified product managers, etc. They don't necessarily know more context than the more junior engineers, and they will most likely review slowly while uncovering fewer issues.

raw_anon_1111 · 6 days ago
Why only AI generated code? I wouldn’t let a junior or mid level developer’s code go into production without at least verifying the known hotspots - concurrency, security, database schema, and various other non functional requirements that only bite you in production.

I’m probably not going to review a random website built by someone except for usability, requirements and security.

happytoexplain · 6 days ago
I didn't restrict my opinion to genAI code. I'm expressing a general thought that was relevant before AI. AI is just salient in relation to it.

I also said senior review is valuable, but I'm not 100% sure if you're implying I didn't.

OrangeDelonge · 5 days ago
I’ve seen hundreds of PRs produced by a junior and reviewed by a mid-level go into prod. I don’t see any problem with that
belval · 6 days ago
The unwritten thing is that if you need seniors to review every single change from junior and mid-level engineers, and those engineers are mostly using Kiro to write their CRs, then what stops the senior from just writing the CRs with Kiro themselves?
qnleigh · 6 days ago
I seriously doubt that they think senior reviewers will meticulously hunt down and fix all the AI bugs. Even if they could, they surely don't have the time. But it offers other benefits here:

1. They can assess whether the use of AI is appropriate without looking in detail. E.g. if the AI changed 1000 lines of code to fix a minor bug, or changed code that is essential for security.

2. To discourage AI use, because of the added friction.
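Point 1 can even be made mechanical. As a hedged sketch (the thresholds and change-type labels here are invented for illustration), a pre-review check could flag changes whose size is out of proportion to their stated intent:

```python
def needs_closer_look(lines_changed: int, change_type: str) -> bool:
    """Flag a change for closer human scrutiny when its size is out of
    proportion to the author's own classification of the change.

    Thresholds are illustrative; a real policy would be tuned per codebase.
    """
    limits = {"bugfix": 50, "refactor": 300, "feature": 500}
    return lines_changed > limits.get(change_type, 100)

# A 1000-line diff labeled as a bugfix is exactly the case to escalate.
assert needs_closer_look(1000, "bugfix") is True
assert needs_closer_look(30, "bugfix") is False
```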

kaffekaka · 5 days ago
Point 1 is important. Seniors (or any developer really) with experience in the code base in question can judge pretty quickly if a CR seems reasonable.
zamalek · 5 days ago
> Review by a senior is one of the biggest "silver bullet" illusions managers suffer from.

My manager has been urging us to truly vibe code, just yesterday saying that "language is irrelevant because we've reached the point where it works - so you don't need to see it." This article is a godsend; I'll take this flawed silver bullet any day of the week.

mrothroc · 6 days ago
Senior review can definitely help, regardless of whether the code comes from a junior or an LLM. We've done this since the dawn of time. However, it doesn't scale, and since LLM volume far exceeds what juniors can produce, you end up overwhelming the seniors, who are normally overbooked anyway.

The other problem is that the type of errors LLMs make are different than juniors. There are huge sections of genuinely good code. So the senior gets "review fatigue" because so much looks good they just start rubber stamping.

I use an automated pipeline to generate code (including terraform, risking infrastructure nukes), and I am the senior reviewer. But I have gates that do a whole range of checks, both deterministic and stochastic, before it ever gets to me. Easy things are pushed back to the LLM for it to autofix. I only see things where my eyes can actually make a difference.

Amazon's instinct is right (add a gate), but the implementation is wrong (make it human). Automated checks first, humans for what's left.
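A rough sketch of that kind of gated pipeline, with hypothetical `CheckResult`, check, and autofix hooks (in practice the checks would shell out to linters, test suites, and policy scanners, and the autofix step would re-prompt the LLM):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CheckResult:
    passed: bool
    autofixable: bool
    message: str = ""

def run_gates(change: str,
              checks: list[Callable[[str], CheckResult]],
              autofix: Callable[[str, CheckResult], str],
              max_rounds: int = 3) -> tuple[str, bool]:
    """Run each check; push easy failures back for automated fixing, and
    only surface the change to a human once the gates pass (or give up)."""
    for _ in range(max_rounds):
        failures = [r for c in checks if not (r := c(change)).passed]
        if not failures:
            return change, True       # clean: ready for human review
        fixable = [f for f in failures if f.autofixable]
        if not fixable:
            return change, False      # needs human judgment now
        for f in fixable:
            change = autofix(change, f)
    return change, False              # gave up after max_rounds
```

The human reviewer's attention is then spent only on changes that already pass the mechanical gates.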

yifanl · 6 days ago
Senior reviews are useful, but as I understand it, Amazon has a fairly high turnover rate, so I wonder just how many seniors with deep knowledge of the codebase they could possibly have.
tartoran · 6 days ago
From "engineers are interchangeable" to high turnover, these are decisions the company made. The payback always comes at some point.
grvdrm · 6 days ago
What a statement at the end. You are absolutely right.

I hear “x tool doesn’t really work well” and then I immediately ask: “does someone know how to use it well?” The answer “yes” is infrequent. Even a yes is often a maybe.

The problem is pervasive in my world (insurance). Number-producing features need to work in a UX and product sense but also produce the right numbers, and within range of expectations. Just checking the UX does what it’s supposed to do is one job, and checking the numbers an entirely separate task.

I don’t know many folks that do both well.

hnthrow0287345 · 6 days ago
>requires an amount of time approaching the time spent if they had just done it themselves.

I would actually say that having at least 2 people on any given work item should probably be the norm at Amazon's size, if you want to churn through people as Amazon does and still want quality.

Doing code reviews is not as highly valued in terms of incentives to employees, and it blocks them from working on things they would get more compensation for.

strogonoff · 5 days ago
I would argue that the amount of time needed for a proper review exceeds the amount of time needed to just do it yourself.

When reviewing, you need to go through every step of implementing it yourself (understand the problem, solve the problem, etc.), but you additionally need to 1) understand someone else’s solution and 2) diff your solution against theirs to provide meaningful feedback.

Review could take roughly equivalent time, but only if I am allowed to reject/approve in a binary way (“my solution would not be the same, therefore denied”) which is not considered appropriate in most places.

This is why I am not a fan of being the reviewer.

lokar · 6 days ago
The goal of Sr code review is not to make the code better, it's to make the author better.
skeeter2020 · 6 days ago
Agree, but even broader: authors. I always viewed reviews as targeting Brooks's less famous finding about the optimal team size being one, asking how we can get better at building systems too big for the individual. I think code review is about shared, consistent understanding, with catching bugs a nice side effect (or justification for the bean counters).
sumeno · 5 days ago
That's not going to work when the author is an LLM
radiator · 6 days ago
Deming's point 3 (of 14): Cease dependence on inspection to achieve quality. Eliminate the need for massive inspection by building quality into the product in the first place.
mrbonner · 6 days ago
What stops the senior from using AI to review the AI generated code the junior published?
tartoran · 6 days ago
That’s something that the junior can do. What companies want to do is put responsibility on someone who has more knowledge and skin in the game
lionkor · 5 days ago
> [...] requires an amount of time approaching the time spent if they had just done it themselves.

Yes, but with the caveat that the junior learns and eventually can become the senior.

femiagbabiaka · 6 days ago
the outcome of the review isn't just that the code gets shipped, it's knowledge transfer from the senior engineer to the junior engineers that then creates more senior engineers
rafaelmn · 5 days ago
Eventually they'll get rid of juniors and mid level devs because realistically it's easier to review when you're the one doing the prompting.
remarkEon · 6 days ago
Other than “don’t hire idiots”, what is the solution to this problem? I agree with you, and this particular systems management issue is not constrained to software.
happytoexplain · 5 days ago
I don't know.

We need smart people at every layer. If leadership isn't in that category, it spreads to all layers.

I don't know how we defeat capitalism to incentivize smart leadership. It's fundamentally opposed to market forces.

rco8786 · 5 days ago
This is also going to systematically turn off your senior staff over time. Most senior engineers aren't that interested in doing even more code review.
mmcconnell1618 · 5 days ago
Also, have massive layoffs every few months just to keep people on edge. AWS wants people to leave, with RTO and badging policies, comp ranges shifting lower unless you sustain year-over-year ratings, and an obsessive push to force AI into every process. Top talent is leaving and will continue to leave AWS.
yalogin · 5 days ago
Don’t forget that this auto-generated code will have subtle bugs and feel complete at the outset
munk-a · 5 days ago
Reviewing code changes (generally) takes more time than writing them for a pretty significant chunk of engineers. If we're optimizing for slop code writing at the expense of funneling more seniors' time into detailed reviews, then we're _doing it wrong_.
napolux · 6 days ago
LGTM
RamblingCTO · 6 days ago
Who said PR reviews need to solve all the things and result in proof against idiots?

So you're saying that peer reviews are a waste of time and only idiots would use/propose them?

happytoexplain · 6 days ago
None of that, sorry if I wasn't clear.

To partially clarify: "Idiot proof" is a broad concept that here refers specifically to abstraction layers, more or less (e.g. a UI framework is a little "idiot proof"; a WYSIWYG builder is more "idiot proof"). With AI, it's complicated, but bad leadership is over-interpreting the "idiot proof" aspects of it. It's a phrase, not an insult to users of these tools.

33MHz-i486 · 5 days ago
In case it isn’t completely obvious from this, it is indeed hellish to work there. Most of AWS has a 2-reviewer requirement. If AI is writing most of the code (and it is, because most Amazon code is copypasta boilerplate), you need 3 developers to sign off to ship anything. But of course, due to headcount attrition, managers have ~1.5 developers to a project. Meanwhile, the L8 manager is doing nothing except stack ranking each level of engineers according to the number of commits merged and customer-facing features shipped, and firing the bottom 15% at the end of each year. There is no notion of subject matter expertise or technical depth; they're happy to replace whoever with fresh grads (they're all just cogs anyway, right?). Between that and voluntary departures, teams having 80-100% turnover every 5 years is basically par for the course.

Also, while this is happening, most developers are getting constantly hammered by operational issues and critical security tasks, because 1) the legacy toolchain imports 6 different language package ecosystems and 2) no one ever pays down tech debt in legacy code until it's a high-severity ticket count in a KPI dashboard visible to senior management.

mikert89 · 5 days ago
The thing is, this management philosophy worked when AWS knew what they needed to build and just needed to execute with top notch operations.

But now, with AI, they are getting disrupted. Most AWS services might become obsolete; why does an AI need these janky higher-level abstractions AWS piles on?

So now they need innovation, but the company isn’t set up for it. They are forcing short deadlines for product launches that don’t matter.

33MHz-i486 · 5 days ago
It's not even AI. Most of the cloud offerings are commodities now.

The marginal technological direction is determined by middle managers whose primary motivation is “what new customer-facing feature can I launch at this year's re:Invent and build a little empire” (of course this is a shrinking offering as tech debt and complexity pile up).

Junior engineers are burned and churned on execution, seniors are project managers, principals just do high-level reviews & high-level firefighting (note: not actually leading the tech).

Directors and above just spend their time on “what to kill” or “who to fire” as priorities change every 6 months.

prakhar897 · 6 days ago
From the Amazon I know, people only care about a) not getting fired and b) promotions. For devs, the incentive matrix looks like this:

1. Shipping: deliver tickets or be PIPed.

2. Having fewer comments on their PRs: for some drastically dumb reason, having a PR thoroughly reviewed is a sign of bad quality. L7s and above use this metric to PIP folks.

3. Docs: write docs and get them reviewed to show you're high-level.

Without AI, an employee is worse off on all of the above compared to folks who will cheat to get ahead.

I can't see how "requesting" folks to forego their own self-preservation will work, especially when you've spent years pitting people against each other.

malfist · 6 days ago
Not only is having too many comments on your PRs bad for you, but so is not leaving comments on other people's PRs. Both are metrics used in evaluations.
dude250711 · 5 days ago
I'd leave lots of comments out of spite whenever I would feel my PRs had been treated unfairly. If I am going down, you all are coming with me.
embedding-shape · 5 days ago
> 2. Having fewer comments on their PRs: for some drastically dumb reason, having a PR thoroughly reviewed

I'm very far from liking Amazon's engineering culture and general work culture, but having PRs with countless discussions and feedback on them does signal that you did a lot of work without collaborating with others before doing it. Generally, in teams that work well together and build great software, the PRs tend to have very little on them, because most of the issues were resolved while designing together with others.

joeframbach · 5 days ago
I've been involved in so many CRs where I've given feedback over 10 revs, then the submitter cancels the CR and files a new one, for the metrics.
tom_ · 5 days ago
If the review tooling is any good, getting the code somewhere reviewers can see it is a convenient way for people to give and receive feedback. As the saying goes, the system is what it does!

(And/but yes/no, I have never worked at NAGFAM...)

ex-aws-dude · 5 days ago
Eh, I feel like there are some features where you just have to get into the weeds to even design them, and the code review itself is part of the process of designing/figuring out the edge cases.
dboreham · 5 days ago
4. Don't work in the corporate equivalent of The Hunger Games.
999900000999 · 5 days ago
At least in the past, the idea was that you do the dance, vest, and leave.

I missed my FAANG chance during the good years. No retirement for me!

rk06 · 5 days ago
If someone responded with a lot of PR comments, I would set up a meeting directly and avoid unnecessary discussion on the PR.
sdevonoes · 6 days ago
Reviewing AI generated code at PR time is a bottleneck. It cancels most of the benefits senior leadership thinks AI offers (delivery speed).

There’s also this implicit imbalance engineers typically don’t like: it takes me 10 minutes to submit a complete feature thanks to Claude… but for the human reviewing my PR manually, it will take 10-20 times that.

Edit: in the end, real engineers know that what takes effort is a) knowing what to build and why, and b) verifying that what was built is correct. Currently AI doesn’t help much with either of these two points.

The in-betweens are needed, but they are a byproduct. Senior leadership doesn’t know this, though.

hard24 · 6 days ago
Indeed. My view as a CEO is: if you are still reviewing the code yourself, then what use is it that you can produce a bunch of text at a faster rate?

I'd prefer people wrote good-quality code and checked it as they went along... whilst allowing room for other stuff they didn't think of to come to the front. The production process of using LLMs is entirely different; in its current state I don't see the net benefit.

E.g. if you have a very crystallised vision of what you want, why would I want an engineer to use an LLM to write it, when the LLM can't do both raw production and review? Could this change? Sure. But there's no benefit for me personally in shifting toward working that way now; I'd rather it came into existence first, before I expose myself to incremental risk that affects business operations. I want a comprehensive solution.

tech_tuna · 3 days ago
You should lay off your engineering team and do it all in Lovable amigo.
FromTheFirstIn · 4 days ago
Where are you CEO?
beardedetim · 6 days ago
This is what I don't understand about this policy. There's no way a senior has enough spare capacity to be the gatekeeper on every PR made with AI below them. So now we're just making it so the senior people use more AI to keep up, but now they're to blame for letting it happen.

It sounds like a piss-poor deal for seniors, unless senior engineer now means professional code reviewer.

malfist · 5 days ago
That's Amazon in a nutshell, though. Create conflicting metrics for performance, push credit up and responsibility down, and punish everyone below you for not meeting the double standards.
rhubarbtree · 5 days ago
Most AI advocates I know believe this period, of reviewing every line of code, will come to an end when models improve. So there will be no bottleneck. We will simply test and ship, with AI doing all the coding and review.
bandrami · 5 days ago
Possibly, but it doesn't make sense to restructure things in advance of that actually happening, particularly since there's no roadmap for getting there right now.
qnleigh · 6 days ago
Surely they know all this. They're worried about AI code degrading codebase quality, so they're putting on the brakes.
radiator · 6 days ago
> Senior leadership doesn’t know this, though.

Well, you'd think senior leadership should know how their business and their people work.

Barrin92 · 5 days ago
To be fair, senior engineering leads in the software world are like Voltaire's joke about the Holy Roman Empire: neither holy, nor Roman, nor an empire.

Despite the name, there's not a lot of seniority, leadership, or engineering going around.

asadotzler · 5 days ago
LOL
cmiles8 · 6 days ago
The optics here are really bad for Amazon. The continuing mass departures of long-tenured folks, second-rate AI products, and a string of bad outages paint a picture of current leadership overseeing a once-respected engineering train flying off the tracks.

News from the inside makes it sound like things are getting pretty bad.

the_biot · 5 days ago
> The continuing mass departures of long tenured folks

You mean senior programmers that have been there for ages don't want to spend their time reviewing AI slop? Who'd a thunk it!

philip1209 · 5 days ago
I think the deeper need is a "self-review" flow.

People push AI-generated code as if they wrote it. In the past, "wrote it" implied "reviewed it." With AI, that's no longer true.

I advocate for GitHub and other code review systems to add a "Require self-review" option, where people must attest that they reviewed and approved their own code. This change might seem symbolic, but it clearly sets workflows and expectations.

billbrown · 5 days ago
Yes, underthinking is rampant. Glancing at "AI" output is not reviewing code: you have to grok it (in the Heinlein sense) in order to treat it as your own.
userbinator · 5 days ago
You have to grok it, and not just Grok it.
Tyr42 · 5 days ago
Heck, doing a self-review when you wrote the code catches stuff like forgotten debug prints.
nothrabannosir · 5 days ago
(Tangent of the decade: prefixing your debug printfs with NOCOMMIT helps catch them before commit :) A sample pre-commit hook and GitHub CI action I wrote is at https://github.com/nobssoftware/nocommit but it’s just a grep.)
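The grep-based hook the parent describes can be sketched in a few lines; a minimal sketch, assuming POSIX sh plus git (the linked repo's actual implementation may differ):

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit (make it executable with chmod +x).
# Block the commit if any added staged line contains the NOCOMMIT marker.
# The pattern anchors on '+' so only additions in the staged diff match.
if git diff --cached -U0 | grep -E '^\+.*NOCOMMIT'; then
  echo "pre-commit: staged changes contain a NOCOMMIT marker; aborting." >&2
  exit 1
fi
exit 0
```

One caveat: the pattern would also match a `+++` diff header if a filename itself contained NOCOMMIT; a stricter anchor like `^\+[^+]` avoids that edge case.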
therealdrag0 · 5 days ago
Self review should also include adding guiding comments for other reviewers.
jeremyjh · 5 days ago
We have it as a checklist in our PR template. I can’t imagine a first-class feature would be much more meaningful. It surprised me to learn there are developers who have to be reminded to review their own code and test it, but it does seem to help.
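On GitHub-hosted repos, such a checklist typically lives in the repository's pull request template (the `.github/pull_request_template.md` path is GitHub's convention; the specific items below are a hypothetical sketch):

```markdown
<!-- .github/pull_request_template.md -->
## Author self-review (check before requesting review)
- [ ] I read every line of this diff myself, including any AI-generated code
- [ ] Debug prints and temporary markers are removed
- [ ] Tests were run locally and pass
- [ ] Non-obvious parts of the diff have guiding comments for reviewers
```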
kuekacang · 5 days ago
I was lucky to discover git relatively late and Sublime Merge relatively soon. Separating the concerns of editing and reviewing code makes me consider each more as its own thing.

It also makes me more comfortable figuring out what a project's pull-acceptance norms are like (maybe due to how fast a local UI is compared to web-based git). On the other hand, I can only run some basic git CLI commands and can't quickly comprehend raw text-based diffs, especially when encountering Linux patches from time to time.

paxys · 5 days ago
If someone was confident enough to push through an AI change without even reading/reviewing it themselves, adding more buttons to the UI isn't going to change anything.


cvak · 4 days ago
TBH, I do PRs on repos with no other devs just to do a self-review, and I did that before AI.
8note · 5 days ago
The tooling doesn't make it easy currently.

Working at Amazon, when I wanted to review code myself through the CR tool, I'd still end up publishing it to the whole team and have to add some title shenanigans saying it was a self-review or WIP and that others shouldn't look at it yet.

koinedad · 5 days ago
Self-review is #1.


ritlo · 6 days ago
The only way to see the kinds of speed-ups companies want from these things, right now, is to do way too little review. I think we're going to see a lot of failures across sectors where companies set goals for reduced hours on the things they do, based on what they expected from LLM speed-ups, and it will turn out the only way to hit those goals was to spend way too little time reviewing LLM output.

They're torn between "we want to fire 80% of you" and "... but if we don't give up quality/reliability, LLMs only save a little time, not a ton, so we can only fire like 5% of you max".

(It's the same in writing: these things are only a huge speed-up if it's OK for the output to be low-quality, but producing good output using LLMs only saves a little time versus writing entirely by hand. So far, anyway; of course these systems are changing by the day, but this specific limitation has remained true for about four years now, without much improvement.)

SoftTalker · 6 days ago
So will it turn out that actually writing code was never the time sink in the first place?

That has always been my feeling. Once I really understand what I need to implement, the code is the easy part. Sure it takes some time, but it's not the majority. And for me, actually writing the code will often trigger some additional insight or awareness of edge cases that I hadn't considered.

hard24 · 6 days ago
"So will it turn out that actually writing code was never the time sink in the first place?"

Of course it wasn't! Do you think people can envision the right objects to produce all the time? Yeah... we have a lot of Steve Jobses walking around, lol.

As you say, there's 'other stuff' that happens naturally during the production process that adds value.

8note · 5 days ago
At least in my experience at Amazon, it wasn't.

If I wanted, I could queue up weeks' worth of reviews in a couple of days, but that's not getting the whole team more productive.

Spending more time on documents and chatting proved much more useful for getting more output overall.

Even without LLMs, I've been near and on teams where the review burden from developers building away-team code was already so high that you'd need to bake an extra month into your estimates to get somebody to actually look.

somewhereoutth · 5 days ago
> actually writing the code will often trigger some additional insight or awareness of edge cases that I hadn't considered.

Thinking through making.

hard24 · 6 days ago
My prediction is that a Concorde-like incident is going to shatter trust and make people re-think their expectations of LLMs' present capabilities.

Essentially, something big has to happen that affects the revenue/trust of a large provider of goods, stemming from LLM use.

They won't go away entirely. But this idea that they can displace engineers at a high rate will.

Terr_ · 6 days ago
Assuming you mean this crash [0], it reads to me more like a confluence of bad events than a big fundamental design flaw in the Therac-25 mold.

I feel the current proliferation of LLMs is going to resemble the asbestos problem: a cheap miracle thing, overused in many places, with slow, gradual regret and chronic harms/costs. Although I suppose the "undocumented nasty surprise" aspect would depend on the adoption of local LLMs. If it's a monthly subscription to cloud stuff, people are far less likely to lose track of where the systems are and what they're doing.

[0] https://en.wikipedia.org/wiki/Air_France_Flight_4590

_wire_ · 6 days ago
Like bombing a building full of little kids? Oops too late...