At every company I’ve ever worked for, the bottleneck is not “how fast can we spit out more code?” It’s always: “how fast can the business actually decide what they want and create a good backlog?”
Maybe startup development will significantly accelerate with AI churning out all the boilerplate to get your app started.
But enterprise development, where the app is already there and you’re building new features on top of a labyrinthine foundation, is a different beast. The hard part is sitting through planning meetings or untangling weird system dependencies, not churning out net-new code. My two cents anyway.
As a PM I have never not had a backlog of little stuff we'd love to do but can't justify prioritizing. I've also almost always had developers who want to make improvements to the codebase that don't get prioritized because we need new features.
The upside is that both of these things are the kind of tasks that are probably good to give to AI. I've always got little UI bugs that bother me every time I use our application, but that don't actually break anything, won't impact revenue, and thus never get done.
I had a frontend engineer who, whenever I could find a way to give him time to do whatever he wanted, would constantly make little improvements that incrementally sped up pageload.
Both of those cases feel like places where AI probably gets the job done.
That sounds good, but if you have a PMO and an enterprise Change Control Board that controls your not-quite-CI/CD deployments, you may find yourself hamstrung. I've been in that position before, where there was simultaneously a bottleneck of clear requirements and a bunch of stuff (tech debt, small features, bug fixes, UI tweaks) sitting on a branch, ready to deploy whenever downtime was finally approved. Or situations where enterprise policy requires human SQA signoff on everything going to prod. There are lots of places you can create inefficiencies in the system, and a lack of approved requirements is just one.
> developers who want to make improvements to the codebase that don't get prioritized
So, to clarify – developers want to make improvements to the codebase, and you want to give that work to AI? Have you never been in the shoes of making an improvement or a suggestion for a project that you want to work on, seeing it given to somebody else, and then being assigned just more slog that you don't want to do?
I mean, I'm no PM, but that certainly seems like a way to kill team morale, if nothing else.
> I had a frontend engineer who, whenever I could find a way to give him time to do whatever he wanted, would constantly make little improvements that incrementally sped up pageload.
Blows my mind to think that those are the things you want to give to AI. I'd quit.
I never worked at a place where not having a backlog was an issue. Quite the opposite, in fact: there’s always an infinite backlog of stuff. Every single time I’ve seen organizations being slow to decide anything, it was due to the human tendency to stretch tasks to occupy as much time as possible. Planning meetings are “the work” for a legion of people (even though they also know they’re mostly pointless). Untangling dependencies is harder when it involves approvals from other humans (particularly fun when multiple people are “the tech lead,” all objectively wrong but unable to see how they’re simply getting in the way).
I don’t think LLMs are particularly smart, or capable, or will definitely replace humans at anything, or that they’ll lead to better work. But I can already tell that their inherent lack of ego DOES accelerate things at enterprises, for the simple reason that the self-imposed roadblocks above stop happening.
At my current workplace, we do have a roadmap for the business, but the actual backlog of tickets to implement work is all waiting on other siloed teams to make decisions that we are downstream of. This ranges from our infrastructure model to simple things like “which CSS components are we allowed to use.”
We are also explicitly NOT allowed to make any code changes that aren’t part of a story that our product owner has approved and prioritized.
The result is that we scrape together some stories to work on every sprint, but if we finish them early, we quickly run into red tape and circular conversations with other “decision makers” who need to tell us what we’re allowed to do before we actually do anything.
It’s fairly maddening. The whole org is hamstrung by a few workaholic individuals who control decision making for several teams and are chronically unavailable as a result.
I’ve seen this sort of thing happen at other big enterprises too but my current situation is perhaps an extreme example of dysfunction. Point being, when an org gets tangled up like this, LLMs aren’t gonna save it :)
I see several folks commenting on this from the perspective of software engineering. Keep in mind that those are a small minority of Amazon's enormous workforce: an estimate a few years back [0] was 3.5%.
[0] https://newsletter.pragmaticengineer.com/p/amazon
The markets seem to like it, so if you go "we're going AI-first!" every six months, you'll get a little stock price boost. Actually _doing_ anything, naturally, is entirely optional.
Expect this to repeat until the markets choose a new favourite thing (I'm betting on "quantum"; it's getting a lot of press lately and is nicely vague.)
I find it extremely strange that a company leader thought it would be okay to just say "our financial situation is in a place where we cannot adequately staff our teams". The market clearly thought it was strange as well, given their stock performance today.
Really bad look and poor leadership from Jassy. There's a good way to frame adoption of AI, but this is not it.
Small and scrappy teams work when the team has less than 8 hours of corporate busywork to do a day (Jira, compliance training, triaging 10k alerts from the new scanning software, etc.).
We really need immigration reform. Companies prefer H-1B workers because they can treat them like indentured servants: they're bound to the company that sponsored their visa, and have only 60 days to find a new job if fired or they'll be deported. Companies can also reset the green card process in retaliation if they do leave.
I'm radically pro-immigrant. I want the smartest people from around the world to come work here. I want to unshackle them from their corporate sponsors. The current system is unfair to immigrants (who are bound like serfs to their workplace) and to citizens (who lose jobs because corporations prefer serfs).
I'm really surprised there isn't more pushback to the program, since it has aspects that piss off both political sides. Maybe it's just too wonky for mainstream political coverage. A system of indentured servants really is the best description; the potential for abuse is both obvious and widespread. For the other side, of course, they can take jobs from Americans in many cases. Big tech companies love hiring people they can abuse, especially if they can also pay them less than local hires.
My entire old team at Amazon has been reduced from 8 people, of which 5 were citizens (and one got his green card while I was there), to 2 immigrants who arrived right before the pandemic, both from different countries that are at war. I only know this because after the last round of layoffs one of them reached out to me asking if I could get him out of that hell. Seems pretty straightforward what has happened here.
Amazon has a document-writing culture, and all of those documents will be written by AI. People have built careers on writing documents. Same with operations; it's all about audit logs. Internally, there are MCPs that have already automated TPM/PM/oncall/maintenance-coding work. Some orgs in AWS are 90% foreign, there is fear about losing visa status and going back, and the automation is just beginning. Sonnet 4 felt like the first time MCPs could actually be used to automate work.
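For anyone who hasn't touched MCP: Amazon's internal servers obviously aren't public, but a minimal sketch of the shape of one, using the public MCP Python SDK (pip install "mcp[cli]"), looks something like this. The tool and its ticket data are invented for illustration:

    # Hypothetical "oncall helper" MCP server; the tool and its data are
    # invented, not Amazon's internal tooling.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("oncall-helper")

    @mcp.tool()
    def summarize_open_tickets(team: str) -> str:
        """Return a team's open tickets as text for an LLM client to triage."""
        # Stand-in for a real ticket-queue lookup.
        tickets = [("SEV-3", "p95 latency regression"), ("SEV-4", "flaky canary")]
        return "\n".join(f"[{sev}] {title} ({team})" for sev, title in tickets)

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio to any MCP-capable client

Once a model can call tools like that, the "automated oncall/maintenance" claim is less mysterious: it's just an LLM client wired to the same queues humans already work from.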
A region-expansion scoping project in AWS that required detailed design and inspection of tens of code bases was done in a day; it would usually require two or three weeks of design work.
The automation is real, and the higher-ups are directly monitoring token usage in their orgs and pushing senior engineers to increase Q/token-usage metrics among lower-level engineers. Most orgs have a no-backfill policy for engineers leaving; they are supplementing staffing needs with Indian contractors, the expectation being that fewer engineers will be needed in a year's time.
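I can't verify the dashboards, but the rollup being described is trivial to build. A hypothetical sketch, with the log format and field names invented rather than taken from any real telemetry:

    # Hypothetical per-engineer token-usage rollup; the log schema is invented.
    from collections import defaultdict

    usage_log = [
        {"engineer": "alice", "tool": "q", "tokens": 12_500},
        {"engineer": "bob",   "tool": "q", "tokens": 1_200},
        {"engineer": "alice", "tool": "q", "tokens": 8_000},
    ]

    totals: defaultdict[str, int] = defaultdict(int)
    for event in usage_log:
        totals[event["engineer"]] += event["tokens"]

    # Rank engineers by usage, i.e. the metric managers are said to watch.
    for engineer, tokens in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{engineer}: {tokens:,} tokens this week")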
Replacing the headline's "says" with "hopes" would be a more precise statement about the mindset driving the creative theft behind AI: only the hope of deprecating all skilled workers in America with one technological advancement, without loss of gross revenue, could justify as severe a gamble as corporations are taking on it.
At this point I'm convinced that these sorts of headlines are being intentionally put out there as a form of marketing via fear.
What better way to convince people to learn/use your AI offerings than to have those people think their livelihoods are in danger because of them.
AI has provided a lot of unique value, but despite the countless headlines stoking fear of mass job loss, there remains little substance to the claims that it can automate anything but the most menial of jobs.
Until we can directly point the finger at AI as the cause of rising job-loss numbers, and not at other unrelated economic factors, this all just smells of fear mongering with a profit incentive.
CEOs are always personally marketing their "leadership" to maintain their position with the board and stockholders, to line up future jobs, and to push around peers in the sociopath executive class.
These people universally hate labor.
The entire tech industry went on a firing binge when Musk bought Twitter and fired everyone. The Nazi salutes have done a bit to blunt his golden-boy status in the exec ranks, but not THAT much...
Now every CEO is trying to elbow their way in to become the AI golden boy. It's worth tens of billions, as Musk has shown.
If that's all he sees, it's a hilariously myopic take on the impact of AI.
AI is for coding velocity like electricity is for better room lighting.
We haven't seen the nature of work after AI yet; we're still in a nascent phase. Consider every single white-collar role, process, and workflow in your organization up for extreme disruption during this transition period, and it will take at least a decade to even begin to sort out.
I like this metaphor about electric lighting. However, having lived in two ~1850 houses, they sure look and function a lot like they did before electricity, despite nearly every element having been “disrupted” by electricity and all the rest.
Though AI will probably just proactively add features and open PRs, and people can choose which ones to merge.
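The plumbing for that already exists. Here's a rough sketch of a bot opening a pull request through GitHub's public REST API (POST /repos/{owner}/{repo}/pulls); the repo, branch names, and token are placeholders, and a human still decides whether to merge:

    # Hypothetical PR-opening bot; repo, branches, and token are placeholders.
    import os
    import requests

    def open_pr(owner: str, repo: str, head: str, base: str,
                title: str, body: str) -> str:
        resp = requests.post(
            f"https://api.github.com/repos/{owner}/{repo}/pulls",
            headers={
                "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                "Accept": "application/vnd.github+json",
            },
            json={"title": title, "head": head, "base": base, "body": body},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["html_url"]  # link for a human to review and merge

    # open_pr("acme", "webapp", head="ai/fix-tooltip-jitter", base="main",
    #         title="Fix tooltip jitter", body="Proactively generated fix.")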
Which I expect will be the gist of management consulting reports for the next decade.
If human decision-makers become the bottleneck... eventually that will be reengineered.
I'm fascinated to imagine what change control will need to look like in a majority-AI scenario. Expect there will be a lot more focus on TDD.
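Agreed on the TDD point: if humans mostly review specs rather than diffs, the tests become the contract. A toy red/green example (all names invented), where the tests are written first and any AI-authored implementation has to turn them green before change control lets it near prod:

    # Toy TDD loop in one pytest file; all names are invented.
    # Step 1 ("red"): the two tests below were written first and failed.
    # Step 2 ("green"): apply_discount() is the least code that passes them.

    def apply_discount(price: float, percent: float) -> float:
        """Clamp so an over-100% discount never yields a negative price."""
        return max(price * (1 - percent / 100.0), 0.0)

    def test_plain_discount():
        assert apply_discount(price=100.0, percent=25.0) == 75.0

    def test_discount_caps_at_100_percent():
        # The behavioral contract an AI-authored change must satisfy.
        assert apply_discount(price=50.0, percent=150.0) == 0.0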
Isn’t this their general approach since forever?
Somehow they want to act like they are making a shift, rather than say they were ahead of the trend.
The wording changes, the intention doesn't.
If they could pay you nothing they would.
For 6/17, the S&P 500 was down 0.84%, QQQ (Nasdaq stocks) was down 0.98% and AMZN was down 0.59%.
AMZN slightly outperformed the market today.
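For what it's worth, here's the arithmetic behind "slightly outperformed," using the numbers quoted above:

    # Daily % changes for 6/17, as quoted above.
    spx, qqq, amzn = -0.84, -0.98, -0.59
    print(f"AMZN vs S&P 500: {amzn - spx:+.2f} pp")  # +0.25 pp
    print(f"AMZN vs QQQ:     {amzn - qqq:+.2f} pp")  # +0.39 pp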