Readit News
PhantomHour commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
no_wizard · 2 days ago
I can’t think of another career where management continuously does not understand the realities of how something gets built. Software best practices are on their face orthogonal to how all other parts of a business operate.

How does marketing operate? In a waterfall like model. How does finance operate? In a waterfall like model. How does product operate? Well you can see how this is going.

Then you get to software and it’s 2 week sprints, test driven development etc. and it decidedly works best not on a waterfall model, but shipping in increments.

Yet the rest of the business does not work this way, it’s the same old top down model as the rest.

This I think is why so few companies or even managers / executives “get it”

PhantomHour · 2 days ago
> I can’t think of another career where management continuously does not understand the realities of how something gets built.

This is in part a consequence of how young our field is.

The other comment pointing to other engineering disciplines is right. The difference is that fields like civil engineering are millennia old. We know that Egyptian civil engineering was advanced and shockingly modern even 4.5 millennia ago. We've basically never stopped having qualified civil engineers around who could manage large civil engineering projects & companies.

Software development in its modern form has its start still within living memory. There simply weren't people available to manage the young early software development firms as they grew, so management got imported from other industries.

And to say something controversial: Other engineering has another major reason why it's usually better understood. They're held to account when they kill people.

If you're engineering a building or most other things, you must meet safety standards. Where possible you are forced to prove you meet them. E.g. Cars.

You don't get to say "Well, cars don't kill people, people kill people. If someone in our car dies when they're hit by a drunk driver, that's not our problem, that's the drunkard's fault." No. Your car has to hold up to a certain level of crash safety; even if someone else causes the accident, your engineering work had damn better hold up.

In software, we just do not do this. The very notion of "software kills people" is controversial, treated as a joke: "of course it can't kill people, what are you on about?". Say you neglect your application's security. There's an exploit, a data breach, you leak your users' GPS locations. A stalker uses the data to find and kill their victim.

In our field, the popular response is to go "Well we didn't kill the victim, the stalker did. It's not our problem.". This is on some level true; 'Twas the drunk driver who caused the car crash, not the car company. But that doesn't justify the car company selling unsafe cars, why should it justify us selling unsafe software? It may be but a single drop of blood, but it's still blood on our hands as well.

As it stands, we are fortunate that there haven't been incidents big enough, killing enough people, for governments to take action and forcibly change this mindset. It would be wise for software development to take up this accountability of its own accord and prevent such a disaster.

PhantomHour commented on Mark Zuckerberg freezes AI hiring amid bubble fears   telegraph.co.uk/business/... · Posted by u/pera
NickC25 · 2 days ago
As I've said in other comments - expecting honesty and ethical behavior from Mark Zuckerberg is a fool's errand at best. He has unchecked power and cannot be voted out by shareholders.

He will say whatever he wants and because the returns have been pretty decent so far, people will just take his word for it. There's not enough class A shares to actually force his hand to do anything he doesn't want to do.

PhantomHour · 2 days ago
Zuckerberg started as a sex pest and got not an iota better.

But we could, as a society, stop rewarding him for this shit. He'd be an irrelevant fool if we had appropriate regulations around the most severe of his misdeeds.

PhantomHour commented on In a first, Google has released data on how much energy an AI prompt uses   technologyreview.com/2025... · Posted by u/jeffbee
dylan604 · 2 days ago
> The way around that is that is for LLM-based tools to run a regular search engine query in the background and feed the results of that in alongside the prompt. (Usually a two-step process of the LLM formulating the query, then another pass on the results)

Would the LLM-based tool be able to determine that the top results are just SEO-spam sites and move lower in the list, or just accept the spam results as gospel?

PhantomHour · 2 days ago
This is an extremely tricky question.

The practical, readily-observable-from-output answer is "No, they cannot meaningfully identify spam or misinformation, and do indeed just accept the results as gospel"; Google's AI summary works this way and is repeatedly wrong in exactly this way, sometimes even in Google's own ad copy.

The theoretical mechanism is that the attention mechanism in LLMs can select which parts of the search results get carried forward into the generated answer. This is how the model is capable of finding the parts of the text that are "relevant". The problem is that this just isn't enough to robustly identify spam or incorrect information.

However, we can isolate this "find the relevant bit" functionality away from the rest of the LLM to enhance regular search engines. It's hard to say how useful this is; Google has intentionally damaged their search engine and it may simply not be worth the GPU cycles compared to traditional approaches, but it's an idea being widely explored right now.
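As a toy illustration of the "find the relevant bit" idea in isolation (bag-of-words cosine similarity standing in for a learned embedding model; the result snippets are made up), reranking by relevance happily promotes whatever best matches the query, spam or not:

```python
from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use learned embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rerank(query: str, results: list[str]) -> list[str]:
    # Order search results by similarity to the query, most relevant first.
    q = vectorize(query)
    return sorted(results, key=lambda r: cosine(q, vectorize(r)), reverse=True)

results = [
    "BUY CHEAP WATCHES NOW limited offer",
    "Energy use per AI prompt, measured in watt-hours",
    "Ten weird tricks doctors hate",
]
print(rerank("energy use of an AI prompt", results)[0])
```

Note that nothing here scores trustworthiness, only topical similarity, which is exactly why relevance extraction alone can't filter spam.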

PhantomHour commented on In a first, Google has released data on how much energy an AI prompt uses   technologyreview.com/2025... · Posted by u/jeffbee
Octoth0rpe · 2 days ago
> an LLM would have linked to the original study

There is a non-trivial chance that the LLM would've added a link to _something_, but links/references seem like a very common thing to hallucinate, no?

PhantomHour · 2 days ago
The way around that is for LLM-based tools to run a regular search engine query in the background and feed the results of that in alongside the prompt. (Usually a two-step process of the LLM formulating the query, then another pass on the results)

The results that get used can then have their links either appended to the final output separately, guaranteeing they are correct, or added to the prompt with the LLM "told to include them", which retains a risk of hallucination, yes.

Common to both of these is the failure mode that the LLM can still hallucinate whilst "summarizing" the results, meaning you still have no guarantee that the claims made actually show up in the results.

PhantomHour commented on In a first, Google has released data on how much energy an AI prompt uses   technologyreview.com/2025... · Posted by u/jeffbee
PhantomHour · 2 days ago
One thing I'm missing in the full report is what a 'median prompt' actually looks like. How many tokens? What's the distribution of prompt sizes like? Is it even the same 'median prompt' between 2024 and 2025?

The numbers are cute but we can't actually do anything with them without those details. At least an average could be multiplied by the # of queries to get the total usage.
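To make the median-vs-average distinction concrete (the token counts are made up; the skew is the point), only the mean lets you recover a total from a per-prompt figure:

```python
from statistics import mean, median

# Hypothetical prompt sizes in tokens: four small prompts, one huge one.
prompt_tokens = [20, 25, 30, 40, 4000]

total = sum(prompt_tokens)
via_mean = mean(prompt_tokens) * len(prompt_tokens)      # recovers the total exactly
via_median = median(prompt_tokens) * len(prompt_tokens)  # wildly off under skew

print(total, via_mean, via_median)  # 4115 4115.0 150
```

With a skewed distribution, "median prompt" times query count says almost nothing about aggregate usage.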

PhantomHour commented on Mark Zuckerberg freezes AI hiring amid bubble fears   telegraph.co.uk/business/... · Posted by u/pera
torginus · 2 days ago
It boggles the mind that this kind of management is what it takes to create one of the most valuable companies in the world (and becoming one of the world's richest in the process).
PhantomHour · 2 days ago
The answer is fairly straightforward. It's fraud, and lots of it.

An honest businessman wouldn't put their company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even if it's unbacked.

An honest businessman would never have gotten Facebook this valuable, because so much of the value is derived from ad fraud that Facebook is both party to and knows about.

An honest businessman would never have gotten Facebook this big, because its growth relied extensively on crushing all competition through predatory pricing, illegal both within the US and internationally as "dumping".

Bear in mind that these are all bad because they're unsustainable. The AI bubble will burst and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If it takes too long for the bubble to burst, if Zuckerberg gets too much time to shit up Facebook, too much time for advertisers to wise up to how many of their impressions are bots, they might collapse entirely.

The rest of Big Tech is not much better. Microsoft and Google's CEOs are fools who run their mouth. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul.

PhantomHour commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
kamaal · 2 days ago
Most people don't notice, but there has been an inflation in headcount over the years. This happened around the time the microservices architecture trend took over.

All of a sudden, to ensure better support and separation of concerns, people needed a team with a manager for each service. If this hadn't been the case, the industry as a whole could likely work with 40% - 50% fewer people eventually. That's because at any given point in time, even with a large monolithic codebase, only 10 - 20% of the code base is in active evolution; what that means in the microservices world is that an equivalent number of teams are sitting idle.

When I started out, huge C++ and Java code bases were pretty much the norm, and that was also one of the reasons why things were hard and the barrier to entry high. In this microservices world, things are small enough that any small group of even low productivity employees can make things work. That is quite literally true, because smaller things that work well don't even need all that many changes on an everyday basis.

To me it's these kinds of places that are in real trouble. There is not enough work to justify keeping dozens or even hundreds of teams, their managers and their hierarchies, all quite literally doing nothing.

PhantomHour · 2 days ago
> In this microservices world, things are small enough that any small group of even low productivity employees can make things work. That is quite literally true, because smaller things that work well don't even need all that many changes on an everyday basis.

You're committing the classic fallacy around microservices here. The services themselves are simpler. The whole software is not.

When you take a classic monolith and split it up into microservices that are individually simple, the complexity does not go away, it simply moves into the higher abstractions. The complexity now lives in how the microservices interact.

In reality, the barrier to entry on monoliths wasn't that high either. You could get "low productivity employees" (I'd recommend you just call them "novices" or "juniors") to do the work, it'd just be best served with tomato sauce rather than deployed to production.

The same applies to microservices. You can have inexperienced devs build out individual microservices, but to stitch them together well is hard, arguably harder than ye-olde-monolith now that Java and more recent languages have good module systems.

PhantomHour commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
jqpabc123 · 2 days ago
He wants educators to instead teach “how do you think and how do you decompose problems”

Ahmen! I attend this same church.

My favorite professor in engineering school always gave open book tests.

In the real world of work, everyone has full access to all the available data and information.

Very few jobs involve paying someone simply to look up data in a book or on the internet. What they will pay for is someone who can analyze, understand, reason and apply data and information in unique ways needed to solve problems.

Doing this is called "engineering". And this is what this professor taught.

PhantomHour · 2 days ago
It's the core problem facing the hiring practices in this field. Any truly competent developer is a generalist at heart. There is value to be had in expertise, but unless you're dealing with a decade(s) old hellscape of legacy code or are pushing the very limits of what is possible, you don't need experts. You'd almost certainly be better off with someone who has experience with the tools you don't use, providing a fresh look and cover for weaknesses your current staff has.

A regular old competent developer can quickly pick up whatever stack is used. After all, they have to; Every company is their own bespoke mess of technologies. The idea that you can just slap "15 years of React experience" on a job ad and that the unicorn you get will be day-1 maximally productive is ludicrous. There is always an onboarding time.

But employers in this field don't "get" that. Regular companies are infested by managers imported from non-engineering fields, who treat software like it's the assembly line for baking tins or toilet paper. Startups, who already have fewer resources to train people with, are obsessed with velocity and shitting out an MVP ASAP so they can go collect the next funding round. Big Tech is better about this, but has its own problems going on, and it seems that the days of Big Tech being the big training houses are also over.

It's not even a purely collective problem. Recruitment is so expensive, but all the money spent chasing unicorns & the opportunity costs of being understaffed just get handwaved. Rather spend $500,000 on the hunt than $50,000 on training someone into the role.

And speaking of collective problems. This is a good example of how this field suffers from having no professional associations that can stop employers from sinking the field with their tragedies of the commons. (Who knows, maybe unions will get more traction now that people are being laid off & replaced with outsourced workers for no legitimate business reason.)

PhantomHour commented on AI is predominantly replacing outsourced, offshore workers   axios.com/2025/08/18/ai-j... · Posted by u/toomuchtodo
crazygringo · 5 days ago
> What are you talking about? The return on investment from computers was immediate and extremely identifiable.

It is well-documented, and called the "productivity paradox of computers" if you want to look it up. It was identified in 1987, and economic statistics show that personal computing didn't become a net positive for the economy until around 1995-1997.

And like I said, it's very dependent on the individual company. But consider how many businesses bought computers and didn't use them productively. Where it was a net loss because the computers were expensive and the software was expensive and the efficiency gained wasn't worth the cost -- or worse, they weren't a good match and efficiency actually dropped. Think of how many expensive attempted migrations from paper processes to early databases failed completely.

PhantomHour · 5 days ago
It's well documented. It's also quite controversial and economists still dispute it to this day.

It's economic analysis of the entire economy, from the "outside" (statistics) inward. My point is that the individual business case was financially solvent.

Apple Computer did not need to "change the world" it needed to sell computers at a profit, enough of them to cover their fixed costs, and do so without relying on other people just setting their money on fire. (And it succeeded on all three counts.) Whether or not they were a minute addition to the entire economy or a gigantic one is irrelevant.

Similarly with AI. AI does not need to "increase aggregate productivity over the entire economy", it needs to turn a profit or it dies. Whether or not it can keep the boomer pension funds from going insolvent is a question for economics wonks. Ultimately the aggregate economic effects follow from the individual one.

Thus the difference. PCs had a "core of financial solvency" nearly immediately. Even if they weren't useful for 99.9% of jobs that 0.1% would still find them useful enough to buy and keep the industry alive. If the hype were to run out on such an industry, it shrinks to something sustainable. (Compare: Consumer goods like smartwatches, which were hyped for a while, and didn't change the world but maintained a suitable core audience to sustain the industry)

With AI, even AI companies struggle to pitch such a core, nevermind actually prove it.

PhantomHour commented on AI is predominantly replacing outsourced, offshore workers   axios.com/2025/08/18/ai-j... · Posted by u/toomuchtodo
simianwords · 5 days ago
I highly doubt that the return in investment was seen immediately for personal computers. Do you have any evidence? Can you show me a company that adopted personal computers and immediately increased its profits? I’ll change my mind.
PhantomHour · 5 days ago
I'm sorry but you're asking me here to dig up decades old data to justify my claim that "The spreadsheet software has an immediately identifiable ROI".

I am not going to do that. If you won't take my word for it that "a computer doing a worksheet's worth of calculations automatically" is faster & less error-prone than "a human [with electronic calculator] doing that by hand", then that's a you problem.

An Apple II cost $1300. VisiCalc cost $200. An accountant at the time would've cost ~10x that annually, and would either spend quite a bit more than 10% of their time doing the rote work or hire dedicated people for it.
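Back-of-envelope with those figures (the $15,000 salary and 10% time saved are assumed round numbers, matching the ~10x claim above):

```python
# One-time outlay for the machine and the spreadsheet software.
apple_ii = 1300
visicalc = 200
outlay = apple_ii + visicalc              # $1,500

# Assumed: salary ~10x the outlay, 10% of the year spent on rote work.
accountant_salary = 15000                 # per year, assumed
time_saved = 0.10                         # fraction of the year freed up, assumed
annual_saving = accountant_salary * time_saved  # $1,500/year

payback_years = outlay / annual_saving
print(payback_years)  # 1.0 -- the setup pays for itself in about a year
```

Even under these deliberately conservative assumptions, the ROI is visible within a single year, which is the "immediately identifiable" part.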

u/PhantomHour · Karma: 79 · Cake day: August 11, 2025