mark_l_watson · 20 days ago
I’ll do the Minority Report here: I loved the article. Its point is that rich people hyping AI for their own enrichment have somewhat shut down rational arguments about benefits vs. costs, the costs being: energy use; the environmental impact of using environmentally unfriendly energy sources out of desperation; water pollution from the byproducts of electronics production and recycling, and from water use in data centers; diverting money from infrastructure and social programs; putting more debt stress on society; etc.

I have been a paid AI practitioner since 1982, so I appreciate the benefits of AI. It is just that I hate the almost religious tech belief that real AI will emerge from exponential cost increases in LLM training and inference that yield essentially linear gains.

I get that some lazy-ass people have turned vibe coding and development into what I consider an activity sort of like mindlessly scrolling social media.

ughitsaaron · 20 days ago
I just want to call out how much I appreciate the comparison of “vibe coding” to the endless scroll.
andai · 20 days ago
They're both slot machines, in terms of the effect on the reward system.
addled · 20 days ago
Agreed. I noticed myself having a harder time stopping at the end of the day since I started using AI tools in earnest.

I naturally have a hard time stopping when almost done with something, but with AI everything feels "close" to a big breakthrough.

Just one more turn... Until suddenly it's way later than I thought and I hardly have time to interact with my family.

boxedemp · 20 days ago
I've literally not met one person in tech who thinks LLMs will become sentient or conscious. But I always see people online claiming that there are lots of people who believe that.

Where are they?

Are we sure that's not a misunderstanding of the terminology? Artificial diamonds, such as cubic zirconia, are not diamonds, and nobody thinks they are. 'Artificial' means it's not the real thing. When will conscious, actual intelligence be called 'synthetic intelligence' instead of 'artificial'?

Incidentally, this comment was written by AI.

grogers · 20 days ago
It's not your main point, but I can't help but point out that artificial diamonds ARE diamonds. Cubic zirconia is a different mineral. Usually the distinction is "natural" vs "lab grown" diamonds.

When computers have super-human intelligence, we might be making similar distinctions. Intelligence IS intelligence, whether it's from a machine or an organism. LLMs might not get us there, but something machine-based eventually will.

andai · 20 days ago
Interesting. Artificial does have a negative connotation to it, I never considered that.

Synthetic sounds more neutral, aside from bringing microplastics to my mind.

I guess the field of artificial life has the same issue.

As another comment pointed out, you don't necessarily need consciousness for intelligence. And you don't need either of those for goal oriented behavior.

My favorite example is the humble refrigerator. (The old one, without the microchips!) It has a goal (target temperature), it senses its environment (current temperature), and takes action based on that (turn cooling on or off).
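
A minimal sketch of that control loop in Python (the temperatures and the hysteresis band are made-up numbers, added so the compressor doesn't rapid-cycle):

  # Goal: hold TARGET. Sense: current_temp. Act: run the compressor or not.
  TARGET = 4.0   # degrees C, the fridge's "goal"
  BAND = 1.0     # hysteresis band, so it doesn't flip on/off every tick

  def step(current_temp, cooling_on):
      """One control step: decide whether the compressor should run."""
      if current_temp > TARGET + BAND:
          return True        # too warm: start cooling
      if current_temp < TARGET - BAND:
          return False       # cold enough: stop cooling
      return cooling_on      # inside the band: keep the current state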

A cuter example is the dandelion seed. It "wants" to fly. Obviously! So you can display goal-directed behavior as the result of natural forces moving through you. (Arguably electricity and glucose also fall in that category, but... Yeah...)

LLMs, conscious or not, moved into that category this year, in a big way. (e.g. Opus and Codex routinely bypassing security restrictions in the pursuit of the goal.)

Does it really have goals, or does it merely appear to act as though it has them? Does it appear to act as though it has consciousness?

(I forget who said it: it won't really disrupt the global economic system, it will merely appear to do so ;)

Also, here I am! :)

palmotea · 20 days ago
> I've literally not met one person in tech who thinks LLMs will become sentient or conscious. But I always see people online claiming that there are lots of people who believe that.

I haven't met him, but a famous (pre-ChatGPT) counterexample is Blake Lemoine:

> In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine made claims that the chatbot had become sentient. (https://en.wikipedia.org/wiki/LaMDA).

It's also not uncommon here to see someone respond to a comment questioning the consciousness or sentience of LLMs with a question along the lines of "how do you know anyone is conscious/sentient?" They're not being direct with their beliefs (I believe as a kind of motte-and-bailey tactic), but the implication is that they think LLMs are sentient and bristle when someone suggests otherwise.

sshine · 20 days ago
> When will conscious, actual intelligence be called 'synthetic intelligence' instead of 'artificial'?

One can bypass the whole sentience discussion and say that AI stands for Automated Inference.

If actual, conscious intelligence were to manifest synthetically, as in silicon-based rather than carbon-based, it is a losing battle to convince people because of the philosophical “problem of other minds.”

If there is a functional equivalence between meatspace intelligence and synthetic, it will surely have enough value to reinforce itself, philosophical debates aside.

tim333 · 20 days ago
AI becoming conscious is different to LLMs doing so. Maybe more people are claiming that? I think AI will but LLMs won't.

It depends a bit on what you mean by conscious, but assuming it's human-like, it incorporates a lot of feelings, vision, sound, thoughts and the like: things that are not really language. But we do it with neurons and some chemicals, and I imagine you could do something like that with artificial neural networks and some computer version of the chemistry, but not with language alone.

mullingitover · 20 days ago
> LLMs will become sentient or conscious

I've always doubted it, but then again I've also been skeptical about claims that humans have these capabilities.

rickydroll · 20 days ago
An interesting parallel would be to look at what it took for humans to accept that sapience existed in non-humans, especially non-human primates.

On terminology, I would argue for non-biological intelligence. People can be awfully bioist (biological racist).

jamesfinlayson · 20 days ago
> But I always see people online claiming that there are lots of people who believe that.

I saw someone on the news claiming this recently, but he ran an AI consultancy firm so I suspect he was trying to drum up business.

melagonster · 20 days ago
> LLMs will become sentient or conscious.

People who declare that AGI is coming.

mattclarkdotnet · 20 days ago
What? Nobody says cubic zirconia is an artificial diamond; it’s just a different shiny crystal. We have loads of actual artificial diamonds, so cheap you can get a cutting disc made from them for $10 at Home Depot.

And nobody working in the space either as ML/AI practitioners, or as philosophers, or as cognitive scientists, even thinks we know what consciousness is, or what is required to create it. So there would be no way to tell if an AI is conscious because we haven’t yet managed to reliably tell if humans, or dogs, or chimpanzees or whales are conscious.

The claim that is often made is that more work on the current generation of AI tech will lead to AGI at a human or better level. I agree with Yann LeCun that this is unlikely.

pllbnk · 20 days ago
Lucky you. I have personally faced some cargo cult-like behavior.
georgeecollins · 20 days ago
"You can see the computer age everywhere but in the productivity statistics "

Robert Solow, Nobel Prize-winning economist, 1987.

kamaal · 20 days ago
Oh well, I had a talk with a director at the office. He said that instead of using AI to get more productive, people were using AI to get lazier.

1)

What he means is this: say you needed to get something done. You could ask AI to write you a Python script which does the job. Next time around you could reuse the same Python script. But that's not how people are using AI; they basically think of a prompt as the only source of input, and the output of the prompt as the job they want to get done.

So instead of reusing the Python script, they basically re-prompt the same problem again and again.

While this gives an initial productivity boost, you now arrive at a new plateau.

2)

The second problem is that ideally you would write the Python script once and improve it over time. An ever-improving Python script should eventually do most of your day job.

That's not happening. Instead, since re-prompting is common, people are now executing a list of prompts to get complex work done, and then making that a workflow.

So ideally there should be a never-ending productivity increase, but when you sell a prompt as a product, people use it as a black box to get things done.

A lot of this has to do with the lack of an automation/programming mindset to begin with.
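
To make point 1 concrete, a hypothetical sketch of the "write the script once" workflow; the dedupe task, filename, and column argument are invented for illustration:

  # dedupe.py - asked for once from an LLM, then saved, reused, and improved.
  # Usage: python dedupe.py input.csv email
  import csv
  import sys

  def dedupe_rows(path, key_column):
      """Keep only the first row seen for each value of key_column."""
      seen, kept = set(), []
      with open(path, newline="") as f:
          for row in csv.DictReader(f):
              if row[key_column] not in seen:
                  seen.add(row[key_column])
                  kept.append(row)
      return kept

  if __name__ == "__main__":
      for row in dedupe_rows(sys.argv[1], sys.argv[2]):
          print(row)

Re-running the saved script costs nothing and compounds as you improve it; re-prompting the same task starts from zero every time.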

palmotea · 20 days ago
> "You can see the computer age everywhere but in the productivity statistics "

> Robert Solow, Nobel Prize-winning economist, 1987.

Some skeptic was wrong in the past, therefore we should disbelieve every skeptic, forever.

That's the argument, right?

rubslopes · 20 days ago
I do believe I'm more productive, but my company is not charging much more for it. I'm working the same hours. Maybe that's the reason.

I just had a meeting yesterday where someone from the customer support team vibe-coded a solution in a few hours. The boss said, "Let's just give this as a gift; this product is not our focus and I want to show them how AI makes us work fast."

agnishom · 20 days ago
The most important cost that you didn't mention is the loss of social trust and the harm that it will do to social infrastructure.

Junior developers will find it harder to be hired and trained. The case for lesser-known artists and musicians is much worse. The scientific literature will be flooded by low-quality AI slop of questionable veracity. Drafts of good debut novels will be harder to find. When someone writes a love song, their romantic partner(s) will have to question whether it was LLM-generated. Nobody will be able to trust video footage of any kind, and people will have a much harder time telling what is the truth.

I don't think standard economic indicators are tuned to detect these externalities in the short to medium term.

palmotea · 20 days ago
> The most important cost that you didn't mention is the loss of social trust and the harm that it will do to social infrastructure.

This. I think generative AI will mostly generate destruction. Not in the nuking cities sense, but in hollowing out institutions and social bonds, especially the complicated and large-scale kind that have enabled advanced civilization. In many ways, things will revert to a more primitive state: only really knowing people in your local vicinity (no making friends online, because it'll be mostly dead-internet bots out there), only really knowing the news you see yourself, more reliance on rumor and hearsay, removal of the ability for the little guy to challenge and disprove institutional propaganda (e.g. can't start a blog and put up some photos and have people believe your story about what happened), etc.

yunwal · 20 days ago
> Junior developers will find it harder to be hired and trained. The case for lesser-known artists and musicians is much worse. The scientific literature will be flooded by low-quality AI slop of questionable veracity. Drafts of good debut novels will be harder to find. When someone writes a love song, their romantic partner(s) will have to question whether it was LLM-generated. Nobody will be able to trust video footage of any kind, and people will have a much harder time telling what is the truth.

I think most people will retreat into smaller spaces where they can rely on people to not deceive them. Everyone is moving to Discord/group chats now for any sort of trustworthy information. This might be a good thing, honestly. It was probably never good that we all got our information from the same place.

rising-sky · 20 days ago
I guess this is a trend now because it's a contrarian, attention-grabbing headline. See:

- "Thousands of CEOs just admitted AI had no impact on employment or productivity..." https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-s...

- “Over 80% of companies report no productivity gains from AI…” https://www.tomshardware.com/tech-industry/artificial-intell...

But fundamentally, large shifts like this are like steering a supertanker: the effects take time to percolate through economies as large and diversified as the US. This is the Solow paradox / productivity paradox: https://en.wikipedia.org/wiki/Productivity_paradox

  > The term can refer to the more general disconnect between powerful computer technologies and weak productivity growth

XenophileJKO · 20 days ago
I keep seeing the "Productivity Paradox" highlighted over and over again. I think one thing people are missing with this specific technology is that unlike many of the comparisons (computers, internet, broadband, etc.), AI in particular doesn't have a high requirement at the consumer side. Everyone already has everything they need to use it.

There will be a period, like the one we are in now, where dramatic capability gains (like the recent coding gains) take a while for people to adapt to; however, I think the change will be much faster this time. Even the speed of uptake in coding tools over the last 3 months has been faster than I predicted. I think we'll see other shifts like this in different sectors, where things change over the course of a few months.

afavour · 20 days ago
> AI in particular doesn't have a high requirement at the consumer side. Everyone already has everything they need to use it.

That isn’t actually true, though; right now everyone has a hard dependency on a cloud service. That service is currently sold to them at a deep discount by companies that are losing billions.

When the market eventually corrects it’ll be interesting to see how much AI ends up costing. At the very least it will be comparable to the broadband internet connection you mentioned. Possibly a whole lot more.

fallinditch · 20 days ago
At the large insurance company I'm doing some work for, the big capability gains have yet to materialize. There are some pockets of workflow innovation, but big institutions carry a kind of inertia and are slow to adapt.

But as the organization slowly learns and adapts I'm sure the capability gains will materialize.

sifar · 20 days ago
> AI in particular doesn't have a high requirement at the consumer side

Effective use of these AI tools requires strong critical thinking skills, which are in short supply.

Lalabadie · 20 days ago
I would argue that the leadership and financial support behind AI (in its current form) does not have the patience or level-headedness to treat it as a long-term change, and is very much trying an all-or-nothing approach: making a long shift happen in a few years, or burning through nation-level budgets trying.

To my eyes, the problem is not the productivity gain arriving slowly, but the immediate draining of funding from virtually all other areas of innovation.

camillomiller · 20 days ago
This. They created an innovation black hole and we will all pay the long-term consequences of it
slongfield · 20 days ago
This isn't new.

"The Productivity Paradox" is what they called it when people were skeptical that computer would end up finding a place in the office. There are articles from the 90s complaining about how much people are spending on buying computers for no real impact on productivity https://dl.acm.org/doi/10.1145/163298.163309

ej88 · 20 days ago
Even the source article in the first link (https://www.nber.org/papers/w34836) notes that the same firms "predict sizable impacts" over the next three years.

Late 2025 was an inflection point for a lot of companies.

camillomiller · 20 days ago
All of the technologies mentioned eventually made things better. In order to work, gen AI requires a general acceptance of widespread, mostly mediocre outcomes. I don’t see how the comparison stands.
edgyquant · 20 days ago
Seems like an ever-shifting goalpost: we are told that tons of layoffs etc. are already happening due to the tech, and yet when quantified it’s debatable whether there have been any gains at all.
surgical_fire · 20 days ago
How to reconcile this with all the narratives of how powerful AI is, how it can perform right now at the same level as engineers, and so on?

Once confronted with reality we have a "productivity paradox"?

dodu_ · 20 days ago
I'll take it over the seemingly endless deluge of FUD-slop from the past 4 years that claims you'd better get ready for the AI takeover coming for all the jobs, on just-long-enough of a timeline that nobody will remember to hold the author accountable when their prediction is woefully incorrect, and where the "advice" in the article is conveniently to pay for more AI tools.
chris_money202 · 20 days ago
I think I have a pretty good example from work: we had the option to buy a software package from a 3rd-party company. After reviewing the specs we needed, I told my manager to give me a few hours to see if I could produce what we needed with AI instead. Lo and behold, I was able to do it in just a few hours; the AI-built package was tested, integrated, and we moved on. Nowhere was it recorded that I had just saved the company lots of money using AI. I bet there are lots of examples like this that just aren't adequately tracked at both micro and macro levels. For some reason we expected to be able to see these huge gains from AI but we never bothered putting systems in place to observe them.
gpm · 20 days ago
I suspect we are still at the stage where for every story like this there's an offsetting story in the other direction of "I (more commonly reported as my coworker) tried to implement something with AI, messed it up, and ended up wasting a ton of time and resources on that mistake".

It's not that AI can't be useful, but that there's a learning curve, and early in the learning curve we should expect as many resources to be spent learning as resources are saved by using the thing. A macro level view of the economy as a whole sees this as "zero economic growth".

gls2ro · 20 days ago
Personally this excites me: the idea that I can build custom software that fits a specific problem is quite amazing.

But at the company level I see it as a risk: suddenly you might have 50 new small apps, created by people who might not even still work at the company, which are never tested for security/privacy... but more importantly, which, once done, are not pushing the frontier of what a much better solution in that area might be, because nobody is putting time into them. So as time passes, this risks becoming the legacy software used to run your business. Yes, of course you can point an AI at all of them and prompt it to make them better, but that means focusing on that instead of your core business.

Maybe we will see solutions appearing to manage this kind of tech debt.

mrtksn · 20 days ago
I think this is probably going to be the mainstream. Once you are able to define what you need, LLMs are able to produce it. If you are able to understand what is delivered, it ends up working as expected.

I needed an embedded document-based database. A friend of mine with 30 years of experience was vibe coding a database in Rust, and I asked him if he could make it support Swift and be embeddable in iOS; in a few minutes he delivered that using Claude. Then I started vibe coding on it with Codex, adding features I wanted and integrating it into my project. It worked as expected. I think it is close to reaching parity with MongoDB: years of work vibe coded in a weekend.

There’s going to be fundamental changes in how we program computers and consequently the IT industry.

enraged_camel · 20 days ago
Yes. We needed to do a huge migration project that would otherwise have taken us six months and/or cost more than $100k. With the help of Opus 4.5 we finished it in three weeks for a total token cost of $1200. I posted about it last month.

So if you want to think of it in economic terms, some software consulting firm that would otherwise have made six figures instead did not. The vast majority of the money we would have spent stayed in our pocket. Slight decreases like this in “velocity of money” no doubt add up to significant sums.

bgitarts · 20 days ago
This should show up in higher margins for the company.

Is your company a software firm or considered something outside of pure software?

buu700 · 20 days ago
GDP is a classic example of Goodhart's law.
dw_arthur · 20 days ago
It should show up as decreased revenue for the company you didn't buy the product from. It should also show up at your company as increased profit margin, increased investment, an increase in total employee wages, or increased dividend payout.

If this is happening on a widespread basis in the economy we should see evidence of it sometime this year and that's what investors are anticipating with SaaS stocks.

samrus · 20 days ago
Your point about the visibility of the value of avoiding the initial purchase makes sense, but there's something you're missing: there's a cost to maintaining and supporting the software, and that cost won't be factored in either. It might still end up being a positive value proposition, but that remains to be seen.
consumer451 · 20 days ago
> For some reason we expected to be able to see these huge gains from AI but we never bothered putting systems in place to observe them.

I am an economics dummy, but wouldn't the metric be revenue per employee?

raddan · 19 days ago
If your company is unwilling to pay for software when writing software, what makes you think other companies or individuals will be willing to pay for yours? Can’t they just vibe code _your_ solution too?
slopinthebag · 20 days ago
What was the software package?
chris_money202 · 20 days ago
simulation model of a hardware component
pylua · 20 days ago
Doesn’t this hurt the economy?
chris_money202 · 20 days ago
I think it would depend on which company is the more innovative. Is the 3rd party going to use the money we give them to drive further economic growth and innovation? Or is the money we saved going to do that? It's a tough call and could go both ways. We need to somehow measure how the pendulum swings, with more accuracy and clearer signals.
d_watt · 20 days ago
It took 20 years for computers to "add" to the economy.

https://en.wikipedia.org/wiki/Productivity_paradox

preommr · 20 days ago
I am not saying this to be sarcastic, but the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years, or Boris saying coding is solved and that 100% of his code is written by AI.

It's not good enough to just say "Oreo CEOs say we need more Oreos."

There's a real grey area where these tools are useful in some capacity, and in that confusion we're spending billions. Too many people are saying too many conflicting things, and chaos is never good for clear long-term growth.

Either that 20 years is completely inapplicable to AI, or we're in for a world of hurt. There's no in-between, given the kinds of bets that have been made.

ozim · 20 days ago
AI companies don’t have 20 years; they have 5 years at most in which to turn a profit.

They don’t have time to wait for all the companies to pick up AI tooling at their own pace.

So they lie and try to manufacture demand. Well, the demand is there, but they have to manufacture FOMO so that demand materializes now and not in 10 or 20 years.

Terr_ · 20 days ago
It's a "Motte and Bailey" system [0], where the extreme "AI will do everything for you" claim keeps getting thrown around to try to get investors to throw in cash, but then somehow it transmutes into "all technologies took time to mature stop being mean to me."

To be fair, it isn't necessarily the same people doing both at once. Sometimes there are two groups under the same general banner, where one makes the big claims and another responds to perceived criticism of their lesser claim.

[0] https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy

flowerthoughts · 20 days ago
> the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years

An even bigger problem is that people keep listening to them even after they say rationally implausible things. When even Yann LeCun is throwing up his arms and saying "this approach won't work," it's pretty bad.

yifanl · 20 days ago
The difference being that AI's marketing has been significantly more prevalent than any early computing efforts.
jsheard · 20 days ago
Not to mention the investment is on another level. We've got companies with valuations in the hundreds of billions talking about raising trillions to buy all of the computers in the world, before establishing whether they can even turn a profit, never mind upend the economy.
petcat · 20 days ago
This seems false to me. Commodore and Apple were blitzing every advertising medium and especially TV ads in the early 1980s.
paradox460 · 20 days ago
And early (electronic) computing paid immediate dividends, with the Bletchley Park code breakers.
testbjjl · 20 days ago
More than Apple, on a relative scale. Personally I don’t think that.
RigelKentaurus · 20 days ago
For the U.S. economy, productivity is defined as (output measured in $)/(input measured in $). Typically, new technologies (computers, internet, AI) reduce input costs, and due to competition in the market, companies are required to reduce their prices, thereby having an overall deflationary effect on the economy. It's entirely possible that AI will have a small or no effect on productivity as measured above, but society will benefit by getting access to inexpensive products and services powered by inexpensive AI. Individual companies won't use AI to improve their productivity but will need to use AI just to stay competitive.
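
A toy calculation of that effect (all numbers invented): if AI cuts a firm's input costs by 20% and competition forces prices down by the same 20%, measured productivity is unchanged even though consumers now pay less.

  # Hypothetical before/after showing flat measured productivity.
  output_before, input_before = 100.0, 50.0    # $ of output, $ of input
  print(output_before / input_before)          # 2.0

  input_after = input_before * 0.8             # AI cuts input costs 20%
  output_after = output_before * 0.8           # competition cuts prices 20%
  print(output_after / input_after)            # still 2.0, but goods are cheaper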
yowayb · 20 days ago
I think this paragraph from the wikipedia article captures it nicely:

>Many observers disagree that any meaningful "productivity paradox" exists and others, while acknowledging the disconnect between IT capacity and spending, view it less as a paradox than a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, this disconnect is emblematic of our need to understand and do a better job of deploying the technology that becomes available to us rather than an arcane paradox that by its nature is difficult to unravel.

pier25 · 20 days ago
Sure, but the issue with AI is results vs. money burned.

kakapo5672 · 20 days ago
Yep, and the same with the internet. During the 1990s and 2000s, people kept wondering why the internet wasn't showing up in productivity numbers. Many asked if the internet was therefore just a fad or bubble. Same as some now do with AI.

It takes time for technology to show measurable impact in enormous economies. No reason why AI will be any different.

rainsford · 20 days ago
Sure, but you have to consider Carl Sagan's point, "The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown." Some truly useful technologies start out slow and the question is asked if they are fads or bubbles even though they end up having huge impact. But plenty of things that at first appeared to be fads or bubbles truly were fads or bubbles.

Personally I think AI is unlikely to go the way of NFTs and it shows actual promise. What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it. The Internet didn't begin as a massive black hole sucking all the light out of the room for anything else before it really started showing commensurate ROI.

recursive · 20 days ago
Also, there's no particular reason to group it in with those two. There are plenty of things that never showed up at all. It's just not a signal. It's kind of like "My kid is failing math, but he's just bored. Einstein failed a lot too, you know." Regardless of whether Einstein actually failed anything, there are a lot more non-Einsteins who have failed.
sillyfluke · 20 days ago
It didn't take mobile apps 20 years to add to the economy after the launch of the iPhone, though, did it?

mirekrusin · 20 days ago
This article seems to have "basically zero" content.

Today you have to be blind to not see the change that is coming.

The world has its own (massive) inertia, burocracy present in businesses accounting for a big part of it.

AI itself is moving fast, but not at infinite speed. We are starting to have good-enough tooling, but it's not yet available to everyone, and it still hangs on too many hacks that will need to crystallize. People have a lot of mess to sort out in their projects before they can take full advantage of AI tooling: in general, everybody has to do bottom-up cleanup and documentation of all their projects, set up skills and whatnot, and that's assuming their corp is OK with it and not blocking it, and that "using AI" doesn't mean "you can copy-paste code to/from Copilot 365".

As people say - something changed around Dec/Jan. We're only now going to start seeing noticeable changes, and the changes themselves will start speeding up as well. But it all takes time.

sjaiisba · 20 days ago
> As people say - something changed around Dec/Jan

Yes, Anthropic decided they wanted to IPO and got the hype machine in full swing.

Don’t get me wrong, LLMs are here to stay, but how we’re currently using them is likely going to change a lot. Stuff like this:

> in general everybody has to do bottom up cleanup and documentation of all their projects, setup skills and whatnot and that's assuming their corp is ok with it, not blocking it

Is not needed to get a lot out of AI, and is mostly snake oil. Integrating them with actionable feedback is, but that takes a lot of time and rethinking of some existing systems.

I don’t like the Internet analogy cause that’s like producing a new raw material, but AI is gonna be like Excel eventually (one of the most important pieces of software in the world).

mirekrusin · 18 days ago
Dec/Jan was not just Anthropic: the tooling clicked (i.e. VS Code Copilot became great, no need for CC really, but CC for VS Code also stopped sucking), great models were released with really good tool use and better coding, and people had a holiday break at scale, so they actually had time to play with it. It all clicked for a lot of people around that period.
ipaddr · 20 days ago
Nothing changed in Dec/Jan. Everything changed in 2023 with someone's first OpenAI chat, and things are slowly getting adopted into everything, with high, marginal, and negative benefits.

Things are actually slowing down. And society will still see AI adding little to next year's report. The costs still outweigh the benefits.

dvt · 20 days ago
We're still 6-12+ months away from a "killer" AI product. OpenClaw showed what's possible-ish, but it breaks half the time, eats tokens like crazy, and can leak all kinds of secrets. Clearly there's potential there, and a lot of people are working on products in the AI space (myself included), but anyone that's seriously tried to wrangle these models will agree with the reality that it's very hard to reliably get them to do what you want them to do.
DrewADesign · 20 days ago
> We're still 6-12+ months away from a "killer" AI product. OpenClaw showed what's possible-ish, but it breaks half the time, eats tokens like crazy, and can leak all kinds of secrets.

If you replace OpenClaw with any number of other hot LLM products/projects, I’ve been hearing that same exact sentiment for numerous 6-to-12-month periods. I’d argue we have no idea how long it’s going to be, but it’s probably not very soon.

slopinthebag · 20 days ago
We're only 5 years away from fusion energy!
burgerone · 20 days ago
It's not that the technology is not there yet; it's all the ethical concerns, and the mental barrier that nobody wants to spend their day begging AI for solutions.
geraneum · 20 days ago
> This article seems to have "basically zero" content.

Why? It’s descriptive of the “past”, while you’re trying to predict the near/far “future” and project your assumptions. Two different things.

staplers · 20 days ago

  the change that is coming.
Everything you argue reinforces that net output was still basically zero last year. I don't see them talking about 2026 data.

gaigalas · 20 days ago
Change is always coming. It's cute when someone thinks this time it's going to be special.
__loam · 20 days ago
Can't even spell bureaucracy while you're making big predictions like this.
mirekrusin · 20 days ago
Now you know it wasn't written by a bot.
pluto_modadic · 20 days ago
Why do I have a feeling that this will be ignored as biased by the people who need to read it the most?
brokencode · 20 days ago
Why do I get the feeling that AI skeptics will treat it as definitive and irrefutable proof that they were right all along, even though it’s one data point in an industry that hasn’t even been around for 5 years?
albatross79 · 20 days ago
You're right, it is tempting to dunk on AI boosters every time an article like this comes out and puts a damper on their sci-fi fan-fiction fantasies. There's just something about a grown person getting all excited like a child that makes it really satisfying.
ohyoutravel · 20 days ago
It’s a grift perpetuated by the folks at the top, who then sweep along in their slipstream the folks under them, and so on. The folks who “need to hear this” are helpless to go against it and so can’t back down, and the folks who don’t need to hear this, because they’re driving it, have their paychecks aligned to it, so they’re not backing down.

areoform · 20 days ago
I think analyses like these are motivated reasoning. In 2000, I'm sure you could have said that, after infrastructure costs, the internet and the web added "basically zero" to US economic growth. And there were people saying that!

Someone I deeply respect, Clifford Stoll, wrote a book called “Silicon Snake Oil: Second Thoughts on the Information Highway” in 1995. And while he was and is a brilliant person, Stoll was wrong.

Smart people are terrible at predicting the most consequential changes in our future, even when they're familiar with the technology. I wrote a bit about why here: https://1517.substack.com/p/inside-v-outside-context-problem...

Don't make his mistake. Don't look away from the change being wrought. The world has changed and our history now has a new, sharp dividing chapter "Before ChatGPT | After ChatGPT"

and that chapter will go down right next to "Before Trinity | After Trinity"; "Before PC | After PC"; "Before 'Internet' | After 'Internet'"†

† Yes, I know I'm referring to the Web. But we're still using the dark fiber from the .com boom.

HardCodedBias · 20 days ago
I think this is key:

"On top of that, there is currently no reliable way to accurately measure how AI use among businesses and consumers contributes to economic growth."

No doubt people are using it at work (https://www.gallup.com/workplace/701195/frequent-workplace-c...); the question is how much productivity results and to whom it accrues.

Partially this is AI capability (both today and in the past), partially this is people taking time to change their tools.