Investors don't "expect" AI to soar; they NEED AI to soar. Why are we still engaging in this absolutely ridiculous kabuki theatre? This entire cracking edifice is propped up by fictitious capital and pipe dreams, and the music is about to stop. Turns out, you can't charge $20 a month for something no one wants and expect to get a trillion dollars out of it. Shocker!
A popular belief these days is that investors from 2000 ultimately got it right. Truth: they simply got it wrong. They dumped tons of money into things that had no hope of justifying an ROI. They assumed adoption of the technology would happen at a pace that was not just unprecedented but, as it turned out, not even possible. They assumed things would happen in 3 years that actually took 20. Yes - shocker!
> A survey by Dayforce, a software firm, finds that while 87% of executives use ai on the job, just 57% of managers and 27% of employees do. Perhaps middle managers set up ai initiatives to satisfy their superiors’ demands, only to wind them down quietly at a later date.
The article quietly ignored two better explanations: the day-to-day work of executives can be automated more easily (Manna vibes), and/or the execs have a vested interest in AI succeeding because it lets them cut headcount, so they act as evangelists for it.
There is a big compliance issue as well: in many corporations AI is strictly forbidden, so employees will claim they do not use AI at all, but they do.
Medical doctors as well: officially 0%; in reality?
Also, many programmers hide the truth, because it is quite difficult to justify their salary (which was priced in pre-AI times, when programming was much more difficult).
As soon as every big corp started stuffing their UIs with AI buttons, we all knew it was investors pushing for AI use to go sky-high without a care for the nuances of the current state of AI. The reality is that AI usage isn't as impactful as was promised. Where is the productivity increase in being able to generate a picture from a prompt? When deep research can contain hallucinated text or references, where is the productivity increase? It is undeniable that these tools have uses, but when you look at all the investment made into this tech, the outcomes are not great.
Example: the new Yahoo! Mail AI summaries helpfully added to the top of each mail. Thanks, now I get to read each email twice, with the original text placed in a variable location on the screen.
Unfortunately, it's the coders who are most excited to put themselves out of business with incredible code-generation facilities. The techies who remain employed will be the feature vibers with six-figure salaries supplied by the efforts of the now-unemployed programmers. The cycle will thus continue.
What is the definition of "soaring"? The charts in the article showed that the percentage of companies adopting AI for automation has increased 3x. At least 40% of companies pay for GenAI, and at least 10% of employees use GenAI daily. Combined with the fact that companies like OpenAI and Anthropic frequently run out of capacity, how is AI use not soaring?
- If Microsoft bundles Copilot with their standard Office product, you become a company that pays for AI even if you didn't opt in
- Accidentally tapping the AI mode in Google search counts as an AI search. DDG doesn't even wait for you to tap and triggers an AI response. It still counts as AI use even if you didn't mean to use it
- OpenAI, Google and Microsoft have been advertising heavily (usage will naturally go up)
- Scammers using GenAI to scam increases AI usage and GenAI is GREAT for scammers
- Using AI after a meeting to get a summary is nice, but not enough to make a visible impact on a company's output. Most AI usage falls in this bucket
This tech was sold as civilisation-defining. Not GPT-X, but the GPT that is out now. Tech that was "ready to join the workforce", while the reality is that these tools are not reliable in the sense he implied. They are not "workers" and won't change the output of your average company in any significant way.
Sweet-talking investors is easy, but walking the talk is another thing altogether. Your average business has neither the interest nor the time to supervise a worker that behaves unpredictably at random times and doesn't learn to stop making mistakes when told off.
Those two sets of facts can be true at the same time.
40% of companies and 10% of employees can be using AI daily, but only for a small set of tasks, and that usage can be leveling off.
At the same time, AI can be so inefficient that servicing even this small amount of usage runs providers out of capacity.
This is a bad combination because it points to the economic instability of the current system. There isn't enough value to drive higher usage and/or higher prices, and even if there were, the current costs are vastly higher than that value.
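The value-vs-cost mismatch can be sketched with a toy calculation. Every number below is a hypothetical assumption for illustration, not data from the article or from any provider:

```python
# Toy unit-economics sketch. All figures are made-up assumptions.
subscribers = 10_000_000      # paying seats
price_per_month = 20.0        # $ per seat per month
queries_per_seat = 300        # queries per seat per month
cost_per_query = 0.10         # $ to serve one query (assumed agent-style usage)

revenue = subscribers * price_per_month
serving_cost = subscribers * queries_per_seat * cost_per_query
gross_margin = (revenue - serving_cost) / revenue

print(f"revenue/month: ${revenue:,.0f}")       # $200,000,000
print(f"serving cost:  ${serving_cost:,.0f}")  # $300,000,000
print(f"gross margin:  {gross_margin:.0%}")    # -50%

# With price and usage fixed, the break-even serving cost per query is:
breakeven = price_per_month / queries_per_seat
print(f"break-even cost/query: ${breakeven:.4f}")  # $0.0667
```

Under these assumed numbers, the provider loses money on every seat; flip `cost_per_query` below the break-even value and the margin turns positive, which is exactly the knife-edge the comment is describing.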
There was a dip on the first chart in the article; it also shows something like 9% of companies using it.
What I wonder, beyond "using" AI, is what value companies are actually seeing. Revenue at both OpenAI and Anthropic is growing rapidly at the moment, but it's not clear if individual companies are really growing their usage, or if it is everyone starting to try it out.
Personally, I have used it sparingly at work, as the lack of memory seems to make it quite difficult to use for most of my coding tasks. I see other people spending hours or even days trying to craft sub-agents and prompts, but not delivering much, if any, output above average. Any output that looks correct but really isn't causes a number of headaches.
For the VCs, one issue is the constant increase in compute. Currently it looks to me like every new release is only slightly better, but the compute and training costs increase at the same rate. The AI companies need end users to need their product so much that they can significantly raise the price. I think this is what they want to see in "adoption": demand so high that they can see a future of increasing prices.
I don't want to be all "did you read the article?" since that's against guidelines, but the text of the article (the stuff in between the graphics and ads) is kind of about exactly that.
Adoption was widespread at first but seems to have hit a ceiling and stayed there for a while now. Meanwhile, there's been little evidence of major changes to net productivity or profitability where AI has been piloted. Nobody is pulling away with radical growth/efficiency for having adopted AI, and in fact the entire market of actual goods and services is mostly still just stagnating outside of the speculative investment being poured into AI itself.
Investment isn't just about making a bet on whether a company/industry will go up or down, but about making the right bet about how much it will do so, over what period of time. The scale of AI investment over the last few years was a bet that AI adoption would keep growing very, very fast and would revolutionize the productivity and profitability of the firms that integrated it. That's not happening yet, which suggests the bet may have been too big or too fast, leaving a lot of investors in an increasingly uncomfortable position.
I get confused about the word "adoption". By adoption, is it meant that a company tried AI, determined it useful, and continues to use it? Just trying something out is not adoption in my mind. Companies try and abandon things all the time.
It has been my experience that technology has to perform significantly better than people do before it gets massively adopted. Self-driving cars come to mind. Tesla has self-driving that almost works everywhere, but Waymo has self-driving that really works in certain areas. Adoption rates among consumers have been much higher with Waymo (I was surrounded by four yesterday), and they are expanding rather rapidly. I have yet to see a self-driving Tesla.
Companies are shoving AI into everything and making it intrusive into everyone's workflow. Thus they can show how "adoption" is increasing!
But adoption and engagement don't equal productive, useful results. In my experience they simply don't, and the bottom is going to fall out of all these adoption metrics when people see the productivity gains aren't real.
The only place I've seen real utility is coding. All other tasks, such as Gemini for document writing, produce something that's about 80% ok and 20% errors and garbage. The work of going back through with a fine-toothed comb to root out the garbage is actually more work and less productive than simply writing the darn thing from scratch.
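The review-vs-rewrite arithmetic can be sketched with a quick model. All parameters are hypothetical (the 20% figure is the comment's own rough estimate):

```python
# Back-of-envelope: when does reviewing AI output beat writing from scratch?
# Every parameter below is an illustrative assumption, not measured data.
def review_time(n_claims, error_rate, check_minutes, fix_minutes):
    """Minutes to verify every claim and rewrite the wrong ones."""
    errors = n_claims * error_rate
    return n_claims * check_minutes + errors * fix_minutes

n_claims = 50              # checkable claims/sections in the document
write_from_scratch = 120   # minutes to just write it yourself

# 20% error rate, 2 min to check each claim, 6 min to fix a bad one:
t = review_time(n_claims, 0.20, 2.0, 6.0)
print(f"review-and-fix: {t:.0f} min vs write-from-scratch: {write_from_scratch} min")
```

With these assumed numbers, review-and-fix comes out at 160 minutes against 120 for writing it yourself: verification cost scales with every claim, not just the wrong ones, which is the mechanism behind the comment's complaint.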
I fear that the future of AI-driven productivity is going to push a mountain of shoddy work into the mainstream. Imagine if the loan documents for your new car had all the qualities of a spam email. It's going to be a nightmare for the administrative world to untangle what is real from the AI slop.
A lot of people who think AI is being used heavily are coders. It's like a blacksmith making a hammer for himself and thinking that everyone is using a hammer every day, all the time.
Let's check agentic AI. Which agents do people mostly talk about? Aha - coding agents!
A lot of the non-coding use is large in quantity but low in usefulness, like the AI summaries that Google sticks in my searches. I actually quite like them, but I doubt I would use them much if I had to do something like click a button to make them appear, let alone pay.
I like how the framing of the article assumes that AI is a revolutionary technology that everyone should be using, and that adoption is just mysteriously slow. This was particularly funny:
> In recent earnings calls, nearly two-thirds of executives at S&P 500 companies mentioned AI. At the same time, the people actually responsible for implementing AI may not be as forward-thinking, perhaps because they are worried about the tech putting them out of a job.
Ah, those brave, forward-looking executives with their finger on the pulse of the future while their employees are just needlessly stalling adoption. Completely absent from the article is the possibility that the technology is not as revolutionary as claimed.
People are captivated by good stories, and AI makes for one hell of a sci-fi narrative.
It's hard to separate the maybe-one-day-plausible fictional future from the on-the-ground reality.