rebeccaskinner · 5 months ago
Looking at my own use of AI, and at how I see other engineers use it, it often feels like two steps forward and two steps back, and overall not a lot of real progress yet.

I see people using agents to develop features, but the amount of time they spend to actually make the agent do the work usually outweighs the time they’d have spent just building the feature themselves. I see people vibe coding their way to working features, but when the LLM gets stuck, it takes long enough for even a good developer to realize it and re-engage their critical thinking that the delay can wipe out the time savings. Having an LLM do code and documentation review seems to usually be a net positive for quality, but that’s hard to sell as a benefit, and most people seem to feel like just using the LLM to review things means they aren’t using it enough.

Even for engineers there are a lot of non-engineering benefits in companies that use LLMs heavily for things like searching email, ticketing systems, documentation sources, corporate policies, etc. A lot of that could have been done with traditional search methods if different systems had provided better standardized methods of indexing and searching data, but they never did and now LLMs are the best way to plug an interoperability gap that had been a huge problem for a long time.

My guess is that, like a lot of other technology driven transformations in how work gets done, AI is going to be a big win in the long term, but the win is going to come on gradually, take ongoing investment, and ultimately be the cumulative result of a lot of small improvements in efficiency across a huge number of processes rather than a single big win.

ernst_klim · 5 months ago
> the amount of time they spend to actually make the agent do the work usually outweighs the time they’d have spent just building the feature themselves

Exactly my experience. I feel like LLMs have potential as expert systems / smart web search, but not as a generative tool, for either code or text.

You spend more time understanding stuff than writing code, and you need to understand what you commit with or without an LLM. But writing code is easier than reviewing it, and understanding by doing is easier than understanding by reviewing (because you get one particular thing at a time and don't have to understand the whole picture at once). So I have a feeling that agents may even have a negative impact.

spwa4 · 5 months ago
The reason companies, or at least sales and marketing, are so incredibly keen on AI is that it can raise response rates on spam and on ads by "hyper-personalizing" them: actually reading the social media accounts of the people looking at the ads and generating ads directly based on that.
breakpointalpha · 5 months ago
Your mileage may vary, but I just got Cursor (using Claude 4 Sonnet) to one-shot a sequence of bash scripts that clean up stale AWS resources. I pasted the Jira ticket description that I wrote, with a few examples, and the scripts work perfectly. Saved me a few hours of bash writing and debugging, because I can read bash but not write it well.

It seems that the smaller the task and the more tightly defined the input and output, the better the LLMs are at one-shotting.
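For a sense of scale, here is a minimal sketch of the kind of tightly scoped cleanup task meant here. It is my illustration, not the commenter's script (which was bash and isn't shown): Python with boto3, assuming a hypothetical policy that unattached EBS volumes older than 90 days count as stale.

    import boto3
    from datetime import datetime, timedelta, timezone

    # Hypothetical policy, not from the thread: unattached EBS volumes
    # older than 90 days are considered stale.
    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)

    # "available" status means the volume is not attached to any instance.
    # Pagination is omitted for brevity.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]

    for vol in volumes:
        if vol["CreateTime"] < cutoff:
            print(f"Stale volume: {vol['VolumeId']} (created {vol['CreateTime']})")
            # ec2.delete_volume(VolumeId=vol["VolumeId"])  # enable after a dry run

A task this size, with a well-defined input (the account's volumes) and output (a delete list), is exactly the shape that tends to one-shot cleanly.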

rebeccaskinner · 5 months ago
I’ve had similar experiences where AI saved me a ton of time when I knew what I wanted and understood the language or library well enough to review, but poorly enough that I’d have been slow writing it myself because I’d have spent a lot of time looking things up.

I’ve also had experiences where I started out well but the AI got confused, hallucinated, or otherwise got stuck. At least for me those cases have turned pathological because it always _feels_ like just one or two more tweaks to the prompt, a little cleanup, and you’ll be done, but you can end up far down that path before you realize that you need to step back and either write the thing yourself or, at the very least, be methodical enough with the AI that you can get it to help you debug the issue.

The latter case happens maybe 20% of the time for me, but the cost is high enough that it erases most of the time savings I’ve seen in the happy path scenario.

It’s theoretically easy to avoid by just being more thoughtful and active as a reviewer, but that reduces the efficiency gain in the happy path. More importantly, I think it’s hard to do for the same reason partially self-driving cars are dangerous: humans are bad at staying attentive in “mostly safe and boring, occasionally disastrous” settings.

My guess is that in the end we’ll see fewer of the problematic cases, in part because AI improves, and in part because we’ll develop better intuition for when we’ve stepped onto the unproductive path. A lot of it will also be that we adopt ways of working that minimize the pathological “lost all day to weird LLM issues” problems by keeping the humans in the loop more deeply engaged. That will necessarily also reduce the maximum size of the wins we get, but we’ll come away with a net positive gain in productivity.

washadjeffmad · 5 months ago
Same. I interface with a team who refuses to conduct business in anything other than Excel, and because of dated corporate mindshare, their management sees them as wizards rather than the odd ones out.

"They're on top of it! They always email me the new file when they make changes and approve my access requests quickly."

There are limits to my stubbornness, and my first use of LLMs for coding assistance was to ask for help figuring out how to Excel, after a mere three decades of avoidance.

After engaging and learning more about their challenges, it turned out one of their "data feeds" was actually them manually copy/pasting into a web form with a broken batch import that they'd given up on submitting project requests for. I quietly fixed it, so they got to retain their turnaround while they planned some other changes.

Ultimately nothing grand, but I would never have bothered if I'd had to wade through the usual sort of learning resources available or ask another person. Being able to transfer and translate higher-level literacy, though, is right up my alley.

jdiff · 5 months ago
That's a dangerous game to play with Bash; I'm not sure there's another language more loaded with footguns.
insane_dreamer · 5 months ago
I've found it to be a significant productivity boost, but only for a small subset of problems. (Things like bash scripts, which are tedious to write, and I'm not that great at bash. Or fixing small bugs in a React app, a framework I'm not well versed in. But even then I have to keep my thinking cap on so it doesn't go off the rails.)

It works best when the target is small and easily testable (without the LLM being able to fudge the tests, which it will do).
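To make "small and easily testable" concrete, here is a hedged sketch (the function and the test cases are invented for illustration, not from the thread): a small pure function paired with a fixed test file that is kept outside the LLM's edit scope, so it cannot fudge the assertions.

    import re

    def slugify(title: str) -> str:
        # The kind of small, self-contained target an LLM can one-shot:
        # lowercase, replace runs of non-alphanumerics with "-", trim.
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
        return slug.strip("-")

    def test_slugify():
        # Keep this file read-only to the agent so the assertions stay honest.
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  spaces  everywhere ") == "spaces-everywhere"
        assert slugify("Already-Slugged") == "already-slugged"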

For many other tasks it's like training an intern, which is worth it if the intern is going to grow, take on more responsibility, and learn to do things correctly. But since the LLM doesn't learn from its mistakes, it's not clearly a worthwhile investment.

DanielHB · 5 months ago
I have found that the limit of LLMs' usefulness for coding is basically what can reasonably be done in a single copy-paste. Usually that means individual functions.

I basically use it as Google on steroids for obscure topics; for simple stuff I still use normal search engines.

CyberMacGyver · 5 months ago
Our new CTO was remarking that our engineering team's AI spend is too low. I believe we have already committed a lot of money but are only using 5% of the subscription.

This is likely why there is a lot of push from the top: they have already committed the money and now have to justify it.

hn_throwaway_99 · 5 months ago
> They have already committed the money now having to justify it.

As someone who has been in senior engineering management, it's helpful to understand the real reason, and this is definitely not it.

First, these AI subscriptions are usually month-to-month, and these days with the AI landscape changing so quickly, most companies would be reluctant to lock in a longer term even if there were a discount. So it's probably not hard to quickly cancel AI spend for SaaS products.

Second, the vast majority of companies understand the sunk cost fallacy. If they truly believed AI wouldn't be a net benefit, they wouldn't force people to use it just because they already paid for it. Salaries for engineers are a hell of a lot more than their AI costs.

The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.

I'm not at all saying you have to buy into this "FOMO rationale", but just saying "they already paid the money so that's why they want us to use it" feels like a bad excuse and just broadcasts a lack of understanding of how the vast majority of businesses work.

empiko · 5 months ago
Agreed. I think that many companies force people to use AI in hopes that somebody will stumble upon a killer use case. They don't want competitors to get there first.
pseudalopex · 5 months ago
It's incomplete but not universally false. Politics is part of how businesses work. Many companies that adopted AI expect results now. People who promised results now have their reputations on the line. Incentives influence beliefs.
lelanthran · 5 months ago
> but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.

This makes no sense for coding subscriptions. Just how far behind can you be in skills by taking a wait-and-see position?

After all, it's not like this specific product needs more than a single day for the user to get up to speed.

watwut · 5 months ago
Companies do not necessarily understand the sunk cost fallacy.

> ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage

But more importantly, this is completely inconsistent with how banks approach any other programming tool or how they approach lifelong learning. They are 100% comfortable with people not learning on the job in just about any other situation.

rsynnott · 5 months ago
> Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.

This doesn't make a huge amount of sense, because the stuff is changing so quickly anyway. It's far from clear that, in the hypothetical future where this stuff is net-useful in five years, experience with _today's_ tools will be of any real use at all.

nelox · 5 months ago
> The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.

Yes, this is the correct answer.

ajcp · 5 months ago
> this is definitely not it.

> is probably because

I don't mean to be contrary, but these statements stand in opposition, so I'm not sure why you are so confidently weighing in on this.

Also, while I'm sure you've "been in senior engineering management", it doesn't seem like you've been in an organization that doesn't do engineering as its product offering. I think this article is addressing the 99% of companies that employ some engineers but do not do engineering. That is to say: "My company does shoes. My senior leadership knows how to do shoes. I don't care about my engineering prowess, we do shoes. If someone says I can spend less on the thing that isn't my business (engineering), then yes, I want to do that."


sschnei8 · 5 months ago
Do you have any data to back up the claim: “vast majority of companies understand suck cost fallacy”?

I’m assuming you meant “sunk” not “suck”. Not familiar with the suck fallacy.

vjvjvjvjghv · 5 months ago
Wish my company did this. I would love to learn more about AI, but the company is too cheap to buy subscriptions.
foogazi · 5 months ago
Can you buy a subscription and see if it benefits you?
discordance · 5 months ago
This comes to mind: "MIT Media Lab/Project NANDA released a new report that found that 95% of investments in gen AI have produced zero returns" [0]

Enterprise is way too cozy with the big cloud providers, who bought into it and sold it on so heavily.

0: https://fortune.com/2025/08/18/mit-report-95-percent-generat...

matwood · 5 months ago
I wonder if people ever read what they link.

> The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained.

The 95% isn't a knock on the AI tools, but that enterprises are bad at integration. Large enterprises being bad at integration is a story as old as time. IMO, reading beyond the headline, the report highlights the value of today's AI tools because they are leading to enterprises trying to integrate faster than they normally would.

"AI tools found to be useful, but integration is hard like always" is a headline that would have gotten zero press.

pseudalopex · 5 months ago
> The 95% isn't a knock on the AI tools, but that enterprises are bad at integration.

You could read the quote that way. But the report knocked the most common tools.

The primary factor keeping organizations on the wrong side of the GenAI Divide is the learning gap, tools that don't learn, integrate poorly, or match workflows. Users prefer ChatGPT for simple tasks, but abandon it for mission-critical work due to its lack of memory. What's missing is systems that adapt, remember, and evolve, capabilities that define the difference between the two sides of the divide.[1]

[1] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...

bawolff · 5 months ago
If the theory is that 1% will be unicorns that will make you a trillionaire, I think investors would be OK with that.

The real question is: do those unicorns exist, or is it all worthless?

orionblastar · 5 months ago
Have to pay the power bill for the data centers for GAI. Might not be profitable.
thenaturalist · 5 months ago
Fun fact: the report was/is so controversial that the link to the NANDA paper in Fortune has been put behind a Google Form you now need to complete before you can access it.
losteric · 5 months ago
Doubt the form has anything to do with how "controversial" it is. NANDA is using the paper's popularity to collect marketing data.
gamblor956 · 5 months ago
Fun fact, it was always behind a form if you wanted to access it through the original/primary link.
nelox · 5 months ago
The claim that big US companies “cannot explain the upsides” of AI is misleading. Large firms are cautious in regulatory filings because they must disclose risks, not hype. SEC rules force them to emphasise legal and security issues, so those filings naturally look defensive. Earnings calls, on the other hand, are overwhelmingly positive about AI. The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place. Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoRan in mineral extraction. These are significant operational changes.

It is also wrong to frame limited stock outperformance as proof that AI has no benefit. Stock prices reflect broader market conditions, not just adoption of a single technology. Early deployments rarely transform earnings instantly. The internet looked commercially underwhelming in the mid-1990s too, before business models matured.

The article confuses the immaturity of current generative AI pilots with the broader potential of applied AI. Failures of workplace pilots usually result from integration challenges, not because the technology lacks value. The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.

rsynnott · 5 months ago
> The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.

There was a weird moment in the late noughties where seemingly every big consumer company was creating a presence in Second Life. There was clearly a lot of strategic interest...

Second Life usage peaked in 2009 and never recovered, though it remains somewhat popular amongst furries.

Bizarrely, this kind of happened _again_ with the very similar "metaverse" stuff a decade or so later, though it burned out somewhat quicker and never hit the same levels of farcical nonsense; I don't think any actual _countries_ opened embassies in "the metaverse", say (https://www.reuters.com/article/technology/sweden-first-to-o...).

julkali · 5 months ago
The issue is that the examples you listed mostly rely on very specific machine learning tools (which are very much relevant and a good use of this tech), while the term "AI" in layman's terms is usually synonymous with LLMs.

Mentioning the mid-1990s internet boom is somewhat ironic imo, given what happened next. The question is whether "business models mature" with or without a market crash, given that the vast majority of ML money is going to LLM efforts.

comp_throw7 · 5 months ago
(You're responding to an LLM-generated comment, btw.)
Frieren · 5 months ago
> Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoran in mineral extraction.

But most AI push is for LLMs, and all the companies you talk about seem to be using other types of AI.

> Failures of workplace pilots usually result from integration challenges, not because the technology lacks value.

Bold claim. Toxic positivity seems to be all too common among AI evangelists.

> The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.

If the financial crisis taught me anything, it is that if one company jumps off a bridge, the rest will follow. Assuming that there must be some real value "because capitalism" misses the main proposition of capitalism: companies will make stupid decisions and pay the price for them.

jraby3 · 5 months ago
As a small business owner in a non-tech business (60 employees, $40M revenue), I find AI definitely worth $20/month, but not as I anticipated.

I thought we'd use it to reduce our graphics department but instead we've begun outsourcing designers to Colombia.

What I actually use it for is to save time and legal costs. For example, a client in bankruptcy owes us $20k. Not worth hiring an attorney to walk us through bankruptcy filings. But we can easily ask ChatGPT to summarize legal notices and advise us on what to do next as a creditor.

flohofwoe · 5 months ago
Which summarizes the one useful property of LLMs: a slightly better search engine that, on top of that, doesn't populate the first five result pages with advertisements. Yet, anyway ;)
Gud · 5 months ago
The saddest part is we used to have highly functional search engines two decades ago, where you would get results from subject matter experts.

Today it’s only the same SEO-formatted crap with no answers.

I am working on a solution.

jordanb · 5 months ago
No doubt in ten years ChatGPT will mostly be telling you things it was paid to say.
pjc50 · 5 months ago
> Not worth hiring an attorney to walk us through bankruptcy filings

The AI doesn't carry professional liability insurance, so this is about as good as asking one of the legal subreddits. It's probably fine in this case since the worst case is not getting the money that you were at risk of not getting anyway.

mgh2 · 5 months ago
Lauris100 · 5 months ago
Thank you
vjvjvjvjghv · 5 months ago
This reminds me of the internet in 2000. Lots of companies were doing .COM stuff, but many didn’t understand what they were doing or why they were doing it. But in the end the internet was a huge game changer. I see the same with AI: there will be a lot of money wasted, but in the end AI will be a huge transformation.
marklubi · 5 months ago
> This reminds me of the internet in 2000

The thing that changed it was smartphones (~7 years later). Suddenly, the internet was available everywhere and not just a thing for nerds.

Not sure that AI is quite there yet; I'm currently trying to identify what the catalyst will be that makes it seamless.

kilroy123 · 5 months ago
I completely agree. I think the financial bubble will also burst soon. Doesn't mean it won't keep on slowly eating the world.
01100011 · 5 months ago
AI isn't about what you are able to do with it. AI is about the fear of what your competitors can do with it.

I said a couple years ago that the big companies would have trouble monetizing it, but they'd still be forced to spend for fear of becoming obsolete.