lmf4lol · 4 months ago
The number of use cases for which I use AI is actually rapidly decreasing. I don't use it anymore for coding, I don't use it anymore for writing, I don't use it anymore for talking about philosophy, etc. And I use zero agents, even though I am (was) the author of multiple MCP servers. It's just all too brittle and too annoying. I feel exhausted when talking too much to those "things"... I am also so bored of all those crap papers being published about LLMs. Sometimes there are some gems, but it's all so low-effort. LLM papers bore the hell out of me...

Anyway, by cutting out AI for most of my stuff, I really improved my well-being. I found the joy in manual programming again, because soon I will be one of the few who actually understand stuff :-). I found the joy in writing with a fountain pen in a notebook, and since then I retain so much more information. Also a great opportunity for the future, when the majority will be dumbed down even more. And for philosophical interaction, I joined an online university and just read the actual books of the great thinkers and discuss them with people and knowledgeable teachers.

What I still use AI for is correcting my sentences (sometimes) :-).

It's kinda the same as when I cut out all(!) social media a while ago. It was such a great feeling to finally get rid of all those mind-screwing algorithms.

I don't blame anyone if they use AI. Do what you like.

raincole · 4 months ago
> Typewriters and printing presses take away some, but your robot would deprive us of all. Your robot takes over the galleys. Soon it, or other robots, would take over the original writing, the searching of the sources, the checking and crosschecking of passages, perhaps even the deduction of conclusions. What would that leave the scholar? One thing only, the barren decisions concerning what orders to give the robot next!

From Isaac Asimov. Something I have been contemplating a lot lately.

ToucanLoucan · 4 months ago
I technically use it for programming, though really for two broad things:

* Sorting. I have never been able to get my head around sorting arrays, especially in Swift syntax. Having it generate them is awesome.

* Extensions/Categories in Swift/Objective C. "Write me an extension to the String class that will accept an array of Int8s as an argument, and include safety checks." Beautiful.

That said, I don't know why you'd use it for anything more. Sometimes I'll have it generate, like, the skeleton of something I'm working on, a view controller with X number of outlets of Y type, with so-and-so functions stubbed in, but even that's going down, because as I build I realize my initial idea can be improved.

malkia · 4 months ago
I've been using LLMs as calculators for words: they can summarize, spot, and correct things, but they're often wrong about it, especially when I have to touch a language I haven't used in a while (Python, PowerShell, Rust as recent examples), or a sub-system (SuperPrefetch on Windows, or why audio is dropping on coworkers' machines when they run some of the tools, and so on, don't ask me why), and all kinds of obscure subjects (where I'm sure experts exist, but when you need them they are not easy, as in "nearby", to reach, and even then they might not help).

But now my grain of salt has grown. It's still helpful, but much like a real calculator, there is a limit to its precision and to what it can do.

For one it still can't make good jokes :) (my litmus test)

fpauser · 4 months ago
This is also my experience with (so-called) AI. Coding with AI feels like working with a dumb colleague who constantly forgets. It feels so much better to write code manually.
ciconia · 4 months ago
> I don't use it anymore for coding

I'm curious, can you expand on this? Why did you start using coding agents, and why did you stop?

lmf4lol · 4 months ago
I started to code with them when Cursor came out. I've built multiple projects with Claude and thought this was the freaking future. Until all joy disappeared and I began to hate the whole process. I felt like I didn't do anything meaningful anymore, just telling a stupid machine what I want and letting it produce very ugly output. So a few months ago, I just stopped. I went back to Vim, even...

I am a pretty idealistic coder who has always thought of coding as an art in itself. And using LLMs robbed me of the artistic aspect of actually creating something. The process of creating is what I love and what gives me the inspiration and energy to actually do it. When a machine robs me of that, why would I continue to do it? Money then being the only answer... A dreadful existence.

I am not a Marxist, probably because I don't really understand him, but I think LLMs are the "alienation of labor" applied to coders, IMHO. Someone should really do a phenomenological study on the "Dasein" of a coder with an LLM.

Funnily enough, I don't see any difference in productivity at all. I have my own company and I still manage to get everything done on deadline.

noodletheworld · 4 months ago
Skill declines over time, without practice.

If you speak fluent Japanese and you don't practice, you will remember being fluent but no longer actually be able to speak fluently.

It's true for many things; writing code is not like riding a bike.

You can't not write code for a year and then come back at the same skill level.

Using an agent is not writing code; but using an agent effectively requires that you have the skill of writing code.

So, after using a tool that automatically writes code for you, that you probably give some superficial review to, you will find, over time, that you are worse at coding.

You can sigh and shake your head and stamp your feet and disagree, but it's flat-out a fact of life:

If you don't practice, you lose skill.

I personally found this happening, so I now do 50/50 time: one week with AI, one week with strictly no AI.

If the no AI week “feels hard” then I extend it for another week, to make sure I retain the skills I feel I should have.

Anecdotally, here at $corp, I see people struggling because they are offloading the “make an initial plan to do x that I can review” step too much, and losing the ability to plan software effectively.

Don't be that guy.

If you offload all your responsibilities to an agent and sit playing with your phone, you are making yourself entirely replaceable.

estebarb · 4 months ago
I can't speak for OP, but I have been researching ways to make ML models learn faster, which is obviously a path that will be full of funny failures. I'm not able to use ChatGPT or Gemini to edit my code, because they will just replace my formulas with SimCLR and call it done.
redwood · 4 months ago
I liken it to a drug that feels good over the near term but has longer-term impacts. Sometimes you have to get things out of your system. It's fun while it lasts, and then the novelty wears off. (And just as some people have the tolerance to do drugs for much longer periods of time than others, I think the same is the case for AI.)
senko · 4 months ago
It sounds like you went in deep for a while, and then rebounded. Good for you (no sarcasm, I mean it).

We should all find little joys in our life and avoid things that deaden us. If AI is that for you, I'd say you made a good decision.

ryandv · 4 months ago
I commend you for your choices. This is the way in the 2020s.
stuaxo · 4 months ago
I use it for a lot of stuff, but ultimately redo almost all of it - which I think is right.

The LLM is the mush of everyone's stuff, like the juice at the bottom of the bin is a mix of all the restaurants' food.

The writing that comes out the other end of the LLM is bland.

What it IS useful for is seeing a wrong thing and then going and making my own version.

I still use it for various little scripts and menial tasks.

The push for this stuff to replace creativity is disgusting.

Sticking LLMs in every place is just crap, I've had enough.

dinvlad · 4 months ago
This is the best take
smt88 · 4 months ago
No one uses agents. They're a myth that Marc Benioff willed into existence. No one who regularly uses LLMs would ever trust one to do unattended work.
Ferret7446 · 4 months ago
You managed to move the goalposts in two sentences; if you realized that your first claim was wrong, you probably should have rewritten it rather than trying to save it at the end.
seanmcdirmid · 4 months ago
The economics of the force multiplier are too strong to ignore, and I'm guessing any SWE who doesn't learn how to use it consistently and effectively will be out of the job market in 5 or so years.
kibwen · 4 months ago
Back in the early 2000s the sentiment was that IDEs were a force multiplier that was too high to ignore, and that anyone not using something akin to Visual Studio or Eclipse would be out of a job in 5 or so years. Meanwhile, 20 years later, the best programmers you know are still using Vim and Emacs.
data-ottawa · 4 months ago
I’m sceptical

The models (even Claude Opus 4.5) still seem to not get things right, miss edge cases, and produce code in a way that's not very structured.

I use them daily, but I often have to rewrite a lot to reshape the codebase to a point where it makes sense to use the model again.

I’m sure they’ll continue to get better, but out of a job better in 5 years? I’m not betting on it.

scuff3d · 4 months ago
They'll be more employable, not less. Since they're the only ones who will be able to fix the huge mess left behind by the people relying on them.
kranke155 · 4 months ago
It’s the opposite. The more you know to do without them the more employable you are. AI has no learning curve, not at the current level of complexity anyway. So anyone can pick it up in 5 years and if you’ve used it less your brain is better.
risyachka · 4 months ago
There is nothing to learn, the entry barrier is zero. Any SWE can just start using it when they really need to.
beefnugs · 4 months ago
Good. The smartest and best should be cutting out middlemen and selling something of their own instead of continuing to shovel all the money up the company pyramids. I think the pyramids' trash will become easier and easier to spot and avoid.
rs186 · 4 months ago
> ... an SWEs who don’t learn how to use it consistently ...

An SWE does not necessarily need to "learn" Claude Code any more than someone who doesn't know programming at all does in order to use the tool effectively. What actually matters is that they know how things should be done without coding assistants, they understand what the tools may be doing, and then they give directions, correct mistakes, and review code.

In fact, I'd argue tools should be simple and intuitive for any engineer to quickly pick up. If an engineer who has solid background in programming but with no prior experience with the tools cannot be productive with such a tool after an hour, it is the tool that failed us.

You don't see people talk about "prompt engineering" as much these days, because that simply isn't so important any more. Any good tool should understand your request like another human does.

fpauser · 4 months ago
Don't think so.
ajkjk · 4 months ago
Adoption = number of users

Adoption rate = first derivative

Flattening adoption rate = the second derivative is negative

Starting to flatten = the third derivative is negative

I don't think anyone cares what the third derivative of something is when the first derivative could easily change by a macroscopic amount overnight.

postexitus · 4 months ago
Adoption rate is not the derivative of Adoption; rate of change is. Adoption rate is the percentage of uptake (so, the same order as Adoption itself). Its flattening means the first derivative is getting close to 0.
ajkjk · 4 months ago
I agree, I think I misunderstood their wording.

In which case it's at least funny, but maybe subtract one from all my derivatives... which kills my point too. Dang.

brianshaler · 4 months ago
It maps pretty cleanly to the well-understood derivatives of a position vector: position (user count), velocity (first derivative, change in user count over time), acceleration (second derivative, speeding up or flattening of the velocity), and jerk (third derivative, change in acceleration, such as the shift from acceleration to deceleration).

It really is a beautiful title.
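The position-vector mapping above can be sketched with discrete differences on a made-up adoption series (the numbers are purely illustrative, not from the article):

```python
# Toy monthly adoption figures (made-up): user counts still rising,
# but by less each month -- growth is decelerating.
adoption = [100, 180, 250, 310, 360, 400]

def diff(xs):
    """Discrete derivative: differences between successive values."""
    return [b - a for a, b in zip(xs, xs[1:])]

velocity = diff(adoption)        # change in user count per month
acceleration = diff(velocity)    # change in that change
jerk = diff(acceleration)        # change in the acceleration itself

print(velocity)      # all positive: adoption is still growing
print(acceleration)  # all negative: the growth is flattening
print(jerk)          # zero here: the deceleration is steady
```

With these numbers, adoption keeps climbing even though every later derivative says the climb is slowing, which is exactly why "starting to flatten" is a statement about higher derivatives, not about adoption itself.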

amelius · 4 months ago
The function log(x) also has a derivative that gets closer and closer to 0.

However, lim x->inf log(x) is still inf.
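A quick numeric check of that point (a throwaway sketch using only the standard library):

```python
import math

xs = [10, 1_000, 100_000]

# d/dx log(x) = 1/x: the slope keeps shrinking toward 0...
slopes = [1 / x for x in xs]

# ...yet log(x) itself keeps growing without bound.
values = [math.log(x) for x in xs]

print(slopes)  # each slope is smaller than the last
print(values)  # each value is larger than the last
```

In other words, a vanishing growth rate does not by itself cap where the total ends up.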

dragonwriter · 4 months ago
> Adoption = number of users

> Adoption rate = first derivative

If you mean with respect to time, wrong. The denominator in adoption rate that makes it a "rate" is the number of existing businesses, not time. It is adoption scaled to the universe of businesses, not the rate of change of adoption over time.

LPisGood · 4 months ago
The adoption rate is the rate of adoption over time.
silveraxe93 · 4 months ago
While there's an extreme amount of hype around AI, it seems there's an equal amount of demand for signs that it's a bubble or it's slowing down.
emp17344 · 4 months ago
Well, that’s only because it exhibits all the signs of a bubble. It’s not exactly a grand conspiracy.
kordlessagain · 4 months ago
You could use that logic to dismiss any analysis of any trajectory ever.

Perfectly excusable post that says absolutely nothing about anything.

crote · 4 months ago
Looking at the graphs in the linked article, a more accurate title would probably be "AI adoption has stagnated" - which a lot of people are going to care about.

Corporate AI adoption looks to be hitting a plateau, and adoption in large companies is even shrinking. The only market still showing growth is companies with fewer than 5 employees - and even there it's only linear growth.

Considering our economy is pumping billions into the AI industry, that's pretty bad news. If the industry isn't rapidly growing, why are they building all those data centers? Are they just setting money on fire in a desperate attempt to keep their share price from plummeting?

prmph · 4 months ago
When all the dust settles, I think it's probably going to be the biggest bubble ever. The unjustified hype is unbelievable.

For some reason I can't even get Claude Code (running GLM 4.6) to do the simplest of tasks today without feeling like I want to tear my hair out, whereas it used to be pretty good before.

They are all struggling mightily with the economics, and I suspect that after each big announcement of a new improved model x.y.z, where they demo some shiny so-called advancement, all the major AI companies heavily throttle their models in use to save a buck.

At this point I'm seriously considering biting the bullet and avoiding all use of AI for coding, except for research and exploring codebases.

First it was Bitcoin, and now this, careening from one hyper-bubble to a worse one.

tarsinge · 4 months ago
I don't understand: how can the adoption rate change overnight if its derivative is negative? Trying to draw a parallel to get intuition: if adoption is distance, adoption rate is speed, and the derivative of adoption rate is acceleration, then if I had the pedal to the floor but then release it and start braking, I won't lose the distance gained (adoption), but my acceleration will flatten and then go negative, and my speed (adoption rate) will ultimately drop to 0, right? That seems pretty significant for an industry built on 2030 projections.
ajkjk · 4 months ago
One announcement from a company or government can suddenly change the derivative discontinuously.

Derivatives IRL do not follow the rules of calculus that you learn in class, because they don't have to be continuous. (You could quibble that if you zoom in enough it can be regarded as continuous, but you don't gain anything from doing that; it really does behave discontinuously.)

didgeoridoo · 4 months ago
Yeah, what a jerk.
voxleone · 4 months ago
You win today.
felipellrocha · 4 months ago
Hehehehehheeh
benatkin · 4 months ago
I think it might be answering long-term questions about direct chat use of AIs. Of course, as AI goes through its macroscopic changes, the amount each person uses it will increase, but some will continue to avoid using AI directly, just as I don't fully use GPS navigation yet benefit from it, whether I like it or not, when others are transporting me or delivering things to me.
scotty79 · 4 months ago
Not really. In this context adoption might be the number of users, but adoption rate is the fraction of users who adopted it out of all users.
ajkjk · 4 months ago
Hm that's true. Both seem plausible in English. I didn't look closely enough to figure out which they meant.
simonw · 4 months ago
Apollo published a similar chart in September 2025: https://www.apolloacademy.com/ai-adoption-rate-trending-down... - their headline for that one was "AI Adoption Rate Trending Down for Large Companies".

I had fun with that one getting GPT-5 and ChatGPT Code Interpreter to recreate it from a screenshot of the chart and some uploaded census data: https://simonwillison.net/2025/Sep/9/apollo-ai-adoption/

Then I repeated the same experiment with Claude Sonnet 4.5 after Anthropic released their own code interpreter style tool later on that same day: https://simonwillison.net/2025/Sep/9/claude-code-interpreter...

par · 4 months ago
As an early and enthusiastic adopter of ChatGPT, LLMs, GANs, etc., I gotta say: my ChatGPT is wrong a LOT. At first, somehow, it was tolerable. But now the hallucinations are getting very annoying and are no longer quirky or funny; they're frustrating, and I have little patience for it.
ares623 · 4 months ago
It's ok, a second LLM will do double checks.
emp17344 · 4 months ago
My guess is AI will find niches where it provides productivity boosts, but won’t be as useful in the majority of fields. Right now, AI works pretty well for coding, and doesn’t really excel anywhere else. It’s not looking like it will get good enough to disrupt the economy at large.
mwkaufma · 4 months ago
Aside from financially-motivated "testimonials," there's no broad evidence that it even works that well for coding, with many studies even showing the opposite. Damning with faint praise.
data-ottawa · 4 months ago
It depends on a lot of things.

I know JavaScript on a pretty surface level, but I can use Claude to wire up react and tailwind, and then my experience with all the other programming I’ve done gives me enough intuition to clean it up. That helps me turn rough things into usable tools that can be reused or deployed in small scale.

That’s a productivity increase for sure.

It has not helped me with the problems that I need to spend 2-5 days just thinking about and wrapping my head around solutions to. Even if it does come up with solutions that pass tests, they still need to be scrutinized and rewritten.

But the small tasks it’s good at add up to being worth the price tag for a subscription.

turtletontine · 4 months ago
I think what’s clear is many people feel much more productive coding with LLMs, but perceived and actual productivity don’t necessarily correlate. I’m sure results vary quite a bit.

My hunch is that long term value might be quite low: a few years into vibe coding huge projects, developers might hit a wall with a mountain of slop code they can no longer manage or understand. There was an article here recently titled “vibe code is legacy code” which made a similar argument. Again, results surely vary wildly

thesumofall · 4 months ago
They show two different surveys that are supposed to show the same underlying truth but differ by a factor of 3x? For the Ramp survey: why the sudden jump from 30% to 50% in March? For the Census one: how could it possibly be that only 12% of companies with more than 250 people "adopted" (whatever that means) AI? It would be interesting if it were true, but these charts don't make any sense at all to me.
tripletao · 4 months ago
The Census Bureau asks if firms are using AI "to help produce goods or services". I guess that's intended to exclude not-yet-productive investigations, and maybe also indirect uses--does LLM-powered OCR for the expense reports for the travelling sales representatives for a widget factory count? That's all vague enough that I guess it works mostly as a sentiment check, where the absolute value isn't meaningful but the time trend might be.

The Ramp chart seems to use actual payment information from companies using their accounting platform. That should be more objective, though they don't disclose much about their methodology (and their customers aren't necessarily representative, the purpose and intensity of use aren't captured at all, etc.).

https://ramp.com/data/ai-index

ac29 · 4 months ago
> The Census Bureau asks if firms are using AI "to help produce goods or services"

That's odd. I use AI tools at work occasionally, but since our business involves selling physical goods, I guess we would not count as an AI adopter in this survey.

malisper · 4 months ago
From the chart, the percentage of companies using AI has been going down over the past couple of months

That's a massive deal because the AI companies today are valued on the assumption that they'll 10x their revenue over the next couple of years. If their revenue growth starts to slow down, their valuations will change to reflect that

adventured · 4 months ago
This bubble phase will play out just as the previous have in tech: consolidation, most of the value creation will go to a small group of companies. Most will die, some will thrive.

Companies like Anthropic will not survive as an independent. They won't come close to having enough revenue & profit to sustain their operating costs (they're Lyft to Google or OpenAI's Uber, Anthropic will never reach the scale needed to roll over to significant profit generation). Its fair value is 1/10th or less what it's being valued at currently (yes because I say so). Anthropic's valuation will implode to reconcile that, as the market for AI does. Some larger company will scoop them up during the pain phase, once they get desperate enough to sell. When the implosion of the speculative hype is done, the real value creation will begin thereafter. Over the following two or three decades a radical amount of value will be generated by AI collectively, far beyond anything seen during this hype phase. A lot of lesser AI companies will follow the same path as Anthropic.

chrismorgan · 4 months ago
Given the charts, that’s a ridiculous claim. Just compare early 2024 in the first chart, for example.

It’s way too early to decide whether it’s flattening out.

malisper · 4 months ago
Three consecutive months of decline starts to look more like a trend. Unless you think there's a transient issue causing the decline, something fundamental has changed
chrismorgan · 4 months ago
Again: compare early 2024. And that’s not the only thing; the second chart shows a possible flattening, but by no means certain yet, especially not when taken with the clear March–April jump; and the first chart shows no dwindling in 1–4, and clear recovery in 250+. The lie is easily put to the claim the article makes:

> Data from the Census Bureau and Ramp shows that AI adoption rates are starting to flatten out across all firm sizes, see charts below.

It’s flat-out nonsense, and anyone with any experience in this kind of statistics can see it.

raincole · 4 months ago
It's just printing headlines out of nothing. If it tried to answer why the two graphs show such different numbers (one ~14%, the other ~55%) I'd be more interested.

> Note: Data is six-survey moving average. The survey is conducted bi-weekly. Sources: US Census Bureau, Macrobond, Apollo Chief Economist

> Note: Ramp Al Index measures the adoption rate of artificial intelligence products and services among American businesses. The sample includes more than 40,000 American businesses and billions of dollars in corporate spend using data from Ramp’s corporate card and bill pay platform. Sources: Ramp, Bloomberg, Macrobond, Apollo Chief Economist

It seems that the real interesting thing to see here is that the companies using Ramp are extremely atypical.

scotty79 · 4 months ago
Especially interesting is the adoption by the smallest companies. This means people still find it increasingly useful at the grassroots level, where things actually get done.

At larger companies, adoption will probably stop at the level where managers start to feel threatened.

crote · 4 months ago
But what does that grassroots adoption look like in practice? Is it a developer spending $250/month on Claude, or is it a local corner shop using it once a month to replace their clip-art flyer with AI slop, plus the example contract they previously found via Google now coming with some legalese gobbledygook ChatGPT hallucinated?

Giving AI away for free to people who don't give a rat's ass about the quality of its output isn't very difficult. But that's not exactly going to pay your datacenter bill...