> The EU trails the US not only in the absolute number of AI-related patents but also in AI specialisation – the share of AI patents relative to total patents.
E.U. patent law takes a very different attitude towards software patents than the U.S. Even if that wasn't the case: “Specialisation” means that no innovation unrelated to AI gets mind share, investment, patent applications. And that's somehow a good thing? Not something you can just throw out there as a presupposition without explaining your reasoning.
> “Specialisation” means that no innovation unrelated to AI gets mind share, investment, patent applications. And that's somehow a good thing?
I don’t think the authors claim we should have 100% specialisation. They just say that the fact that the EU has fewer AI-related patents as a proportion of the total (less specialisation) is evidence that it is behind in AI. That seems reasonable.
> Perhaps it will make patent trolling a bit harder because it is easier to look up existing work and to check if an idea is obvious?
Haha, funny :)
No, it'll be like the rest of the industries that use more AI, they'll spend the same amount of effort (as little as possible) and won't validate anything, and provide worse service, not better. AIslop is everywhere, and seemingly unavoidable for companies to use more and more to cut more corners.
I wonder if web searches used to be pretty productive, then declined as sponsored results and SEO degraded things.
Nowadays an ai assist with a web search usually eliminates the search altogether and gives you a clear answer right away.
for example, "how much does a ford f-150 cost" will give you something ballpark in a second, compared to annoying "research" to find the answer shrouded in corporate obfuscation.
The killer app for AI might just be unenshittifying search for a couple of years.
Then SEO will catch up and we'll have spam again, but now we'll be paying by the token for it. Probably right around the time hallucination drops off enough to have made this viable.
Don't think that is a fair point; the manipulation was done on a topic for which there are hardly any other sources (hot dog eating competition winner). If you want to manipulate what an AI tells you the F-150 street price is, you will compete with hundreds of sources. The AI is unlikely to pick yours.
I used to be able to google a question like that and get an accurate answer within the top 3 results nearly every time about 20 years ago. Then it got worse and worse and became pretty much completely useless about 10 years ago.
Now AI will give me a confident answer that is outright wrong 20% of the time or kind of right but not really 30% of the time. So now I ask something using an AI chatbot and carefully word it so as to have it not get off topic and focus on what I actually want to know, wait 30 seconds for its long ass answer to finish, skim it for the relevant parts, then google the answer and try to see where the AI sourced its answer from and determine whether it misinterpreted/mixed up results or it's accurate. What used to be a 10 second google search is now a 2-3 minute exercise.
I can see very much how people say AI has somehow led to productivity losses. It's shit like this, and it floods the internet and makes real info harder to find, making this cycle worse and worse and take more and more time for basic stuff.
Same experience here. I have fond memories of “google code”, a search engine for code databases which was exceptionally good for finding literal quotes.
The more mainstream a subject is, the lower the incidence of hallucinations. With google search, the mantra “I can’t be the first with this problem/question” almost always proves to be right.
I’m in the process of restoring a piece of vintage electronics, and every time I ask Gemini (fast or thinking) for help I get sent down an irrelevant rabbit hole. It’s taking info from service manuals of other equipment with similar product numbers, misinterpreting diagrams, getting electrical workings wrong.
These things aren’t AI. AI can extract certainty from uncertain data. LLMs take data and turn it into garbage.
Web scraping for LLMs has almost completely ruined the search experience. In the past I could search for simple questions, and quickly get an answer without even having to click through to the links.
This was horrible for web traffic, but the utility level was off the charts. It was possible to get accurate results in milliseconds. It was faster than using an LLM.
Now sites put almost no info in the search result headers, to get people to click through. I think this will work on some users, but most will start using LLMs as search by default.
Search engines have gotten so bad that I almost feel forced to try running SearXNG or some other search engine locally. It's a pain to set up, but degooglefication is always worth it.
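For anyone weighing that option: SearXNG publishes an official Docker image, and a minimal local instance can be started with something like the following. The port mapping and settings volume follow the project's documented defaults, but check the SearXNG docs for your own setup:

```shell
# Pull and run a local SearXNG instance (self-hosted metasearch, no tracking).
# Maps the container's internal port 8080 to localhost:8888 and persists
# the settings directory next to where you run the command.
docker run -d --name searxng \
  -p 8888:8080 \
  -v "$(pwd)/searxng:/etc/searxng" \
  searxng/searxng

# Then point your browser at http://localhost:8888
```

This sidesteps the setup pain somewhat, though you still need to tune which upstream engines it queries in `settings.yml`.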
We always had the technology to do things better, it's the money making part that has made things worse technologically speaking. In this same way, I don't see how AI will resolve the problem - our productivity was never the goal, and that won't change any time soon.
It is not horrible; it has reached the point of absolute excellence. Not for you, the user, but for making money for the creator. Remember, no one paid for web search, so you are the product. If you are the provider of the web search engine, the point of having web search is not to deliver the best search result to the user, but to maximize the amount of money you can make from the sum of the world population. And Google did very well at maximizing its profits without users turning away.
> then declined as sponsored results and SEO degraded things
It didn't decline because of this. It declined because of a general decade long trend of websites becoming paywalled and hidden behind a login. The best and most useful data is often inaccessible to crawlers.
In the 2000s, everything was open because of the ad driven model. Then ad blockers, mobile subscription model, and the dominance of a few apps such as Instagram and Youtube sucking up all the ad revenue made having an open web unsustainable.
How many Hacker News style open forums are left? Most open forums are dead because discussions happen on login platforms like Reddit, Facebook, Instagram, X, Discord, etc. The only reason HN is alive is because HN doesn't need to make money. It's an ad for Y Combinator.
SEO only became an issue once all that was left for crawlers was SEO content instead of genuine content.
> The best and most useful data is often inaccessible to crawlers.
Interesting point.
> Most open forums are dead because discussions happen on login platforms like Reddit, Facebook, Instagram, X, Discord, etc
Ironically, isn't one of the reasons some of those platforms started to require logins that they could track users and better sell their information to advertisers?
Obviously now there are other reasons as well - regulation, age verification etc.
Does this suggest that the AI/ad platforms need to tweak their economic model to share more of the revenue with content creators?
FWIW, these studies are too early. Large orgs have very sensitive data privacy considerations and they're only right now going through the evaluation cycles.
Case in point: this past week, I learned Deloitte only recently approved picking Gemini as their AI platform. Rollout hasn't even begun yet, which you can imagine is going to take a while.
To say "AI is failing to deliver" because of only a 4% efficiency increase is a premature conclusion.
If rollout at Deloitte has not yet begun... How on earth did this clusterfuck [0] happen?
> Deloitte’s member firm in Australia will pay the government a partial refund for a $290,000 report that contained alleged AI-generated errors, including references to non-existent academic research papers and a fabricated quote from a federal court judgment.
Because even if an organisation hasn't rolled out generative AI tools and policies centrally yet, individuals might just use their personal plans anyway (potentially in violation of their contract)? I believe that's called "shadow AI".
Exactly. My company started carefully dipping its toes into org-wide AI mid last year (IT had been experimenting earlier than that, but under pretty strict guidelines from infosec). There are so many compliance and data privacy considerations involved.
And for the record, I think they are absolutely right to be cautious; a mistake in my industry can be disastrous, so a considered approach to integrating this stuff is absolutely warranted. Most established companies outside of tech really can’t have the “move fast, break things” mindset.
Meanwhile, "shadow" AI use is around 90%. And if you guess IT would lead the pack on that, you are wrong. It's actually sales and HR that are the most avid unsanctioned AI tool users.
Agreed. We've been on the agentic coding roller coaster for only about 9-10 months. It only got properly usable on larger repositories around 3-4 months ago. There are a lot of early adopters, grass roots adoption, etc. But it's really still very early days. Most large companies are still running exactly like they always have. Many smaller companies are worse and years/decades behind on modernizing their operations.
We sell SaaS software to SMEs in Germany. Forget AI; these guys are stuck in the last century when it comes to software. A lot of paper-based processes. Cloud is mainly something that comes up in weather predictions for them. These companies don't have budget for a lot of things. The notion that they'll overnight switch to being AI-driven companies is arguably more than a bit naive. It indicates a lack of understanding of how the real world works.
There are a lot of highly specialized niche companies that manufacture things that are part of very complex supply chains. The transition will take decades, not months/weeks. They run on demand for products they specialize in making. Their revenue is driven by demand for that stuff and their ability to make and ship it. There are a lot of aspects about how they operate that are definitely not optimal and could be optimized. And AI provides plenty of additional potential to do something about it. But it's not like they were short of opportunities to do so. It takes more than shiny new tools for these companies to move. Change is invasive and disruptive for these companies. And costly. They take the slow and careful perspective to change.
There's a clean split between people who are clued in on AI and people working in these companies. The Venn diagram has almost no overlap. It's a huge business opportunity for those who are clued in: a rapidly growing number of people, mainly active in software development. Helping the people on the other side of the diagram is what they'll mostly be doing going forward. There's going to be huge demand for building AI-based stuff for these people. It's not a zero-sum game; the amount of new work will dwarf the amount of lost work.
Some of that change is going to be painful. We all have to rethink what we do and re-align our plans in life around that. I'm a programmer. Or I was one until recently. Now I'm a software builder. I still cause software to come into existence. A lot of software actually. But I'm not artisanally coding most of it anymore.
I think people want to read how AI is not working, so those are the articles that are going to get traction.
Personally, I don't think the current frontier models would help the company I work for all that much. The company exists because of skill in networking and human friendships. It exists in spite of technological incompetence.
At some level of ability though, a threshold will be reached and a competitor will eat our lunch whole by building a new business around this future model.
It is not going to be a few percent more productive than our business. It is like the opposite of 0 to 1. The company I work for will go from 1 to 0 really quickly because we simply won't be able to compete on anything besides those network ties. Those ties will break fast if every other dimension of the business is not even competitive and is really in a different category.
Yes, I was recently talking to a person working as a BA who specializes in corporate AI adoption; they didn't realize you could post screenshots to ChatGPT.
As a counter-point, someone from SAP in Walldorf told me they have access to all models by all companies to their choosing, at a more or less unlimited rate. Don't quote me on that, though, maybe I misunderstood him, it was in a private conversation. Anyway, it sounded like they're using AI heavily.
Yeah. We are only just beginning to get the most out of the internet, and the WWW was invented almost 40 years ago - other parts of it even earlier. Adoption takes time, not to speak of the fact that the technology itself is still developing quickly and might see more and more use cases when it gets better.
OpenAI is buying up like half of the RAM production in the world, presumably on the basis of how great the productivity boost is, so from that perspective this doesn't seem any more premature than the OpenAI scaling plan. And the OpenAI scaling plan is like all the growth in the US economy...
4% isn’t failure! A 4% increase in global GDP would be a big deal (more than we get in a whole year of progress), and AI adoption is only just getting started.
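For scale, a back-of-envelope comparison (the GDP level and the ~3% typical growth rate are rough illustrative assumptions, not figures from the study):

```python
# Rough scale check: a one-off 4% productivity gain vs one typical
# year of global GDP growth. Both inputs are assumed round numbers.
GLOBAL_GDP_TRILLIONS = 110     # assumed order-of-magnitude world GDP, USD
TYPICAL_ANNUAL_GROWTH = 0.03   # assumed typical global growth rate

ai_gain = GLOBAL_GDP_TRILLIONS * 0.04
yearly_growth = GLOBAL_GDP_TRILLIONS * TYPICAL_ANNUAL_GROWTH

print(f"4% gain: ~${ai_gain:.1f}T; one typical year of growth: ~${yearly_growth:.1f}T")
```

Under those assumptions, the one-off 4% gain is indeed somewhat larger than a normal year's worth of growth, which is the commenter's point.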
Apropos, I once had a boss who said he was running a headcount reduction pilot and anyone who had the time and availability to help him should email him saying how much time they had to spare. I cannot deny this had a satisfying elegance.
I've always asked the managers: can you kindly disclose all confidential business information? To which they obviously respond with condescending remarks. Then I respond: then how am I going to give you an answer without all the knowledge of how the business runs and operates? You can go away and figure out what is going to work for the business, then you can delegate what you want me to do; it is the reason why you pay me money.
I know at least two different companies in Italy that are pushing NotebookLM and Gemini very hard on their employees (not IT companies; talking banking/insurance/legal).
Which for the positions/roles involved does make some sense (drafting documents/research).
But it seems like most people are annoyed, because the people shoving those aren't even fully able to show how to leverage the tools, the attitude seems like "you need to do what you do right now under lots of pressure, but also find the time to understand how to use these tools in your own role".
Why is it depressing? Personally, unless the alternative is literally starving, I wouldn't want to do a job that a robot could do instead just so that I could be kept busy. That sounds like an insult to human dignity tbh.
You know what is an insult? The supermarket on my street putting on display sloppy ads with a ramen bowl that has chopsticks of three different thicknesses and cartoon characters with scrambled faces. Now that is an insult, because there was a human being doing that job, and I am sure there was a great "productivity boost" related to that change.
I am a heavy AI user myself, and sure as hell I am not putting my foot in that place again.
Is it an insult to human dignity? Let’s go through the thought process.
Commodities are used in an enterprise. Some of the commodities are labor. That labor commodity does work. Involving automation. Eventually (so we are told) those labor commodities manage to automate some forms of labor. Making those other labor commodities redundant.
The labor commodities are discarded. Because why (sigh) use a cart when you now have a car? And you don’t even own a horse.
All of the above is presumably not an insult to human dignity. No. The insult to human dignity is being “kept busy” instead of letting billionaires hoard automation made through human labor.
Of course the real solution is not busywork. But the part about busywork was not on the top of my mind with regards to dignity in this context.
> Personally, unless the alternative is literally starving,
At my job a certain department was very enthusiastic about AI, and were going out of their way to show the top managers that they can leverage it in the best possible way. Maybe they thought they must appear to be at bleeding edge tech-wise, I'm not sure. 75% of people from that dept were let go because of how successful their AI trial was.
I have a hard time understanding what "increased productivity by 4%" actually means and how this metric is measured. A single low-digit figure does not seem high when put into the context of the promises, does it?
What stands out for me is that the productivity gains for small and medium-sized enterprises are actually negative. But in Germany, for example, these companies are the backbone of the entire economy. That means it would be interesting to know how the average was calculated, what method was used, what weighting was applied, etc.
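To make the weighting question concrete: with invented numbers, an average weighted by output can come out clearly positive even while the small and medium firms, which dominate by count, see a negative change. A toy sketch (all figures made up for illustration):

```python
# Toy illustration of how the choice of weighting flips the headline number.
# Each entry: (firm class, share of firm count, share of output, productivity change)
firms = [
    ("small/medium", 0.95, 0.40, -0.01),  # most firms, negative gains
    ("large",        0.05, 0.60, +0.07),  # few firms, large positive gains
]

avg_by_count  = sum(cnt * chg for _, cnt, _, chg in firms)
avg_by_output = sum(out * chg for _, _, out, chg in firms)

print(f"weighted by firm count: {avg_by_count:+.3f}")   # negative
print(f"weighted by output:     {avg_by_output:+.3f}")  # positive, near +4%
```

So a headline "+4%" is entirely compatible with the backbone of the economy seeing losses, depending on how the average was built; that is exactly why the methodology matters.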
All in all, it's an interesting study, but it leaves out a lot, such as long-term effects, new dependencies, loss of skills, employee motivation, and much more.
I cannot read the paper that this article is based on, but it seems that it refers to the use of big data analytics and AI in 2024, not LLMs. It concludes that the use of AI leads to a 4% increase in productivity. Nowadays the debate over AI productivity centers on LLMs, not big data analytics. This article does not seem to contradict more recent findings that LLMs do not (yet) provide increased productivity at the company level.
I kind of want to become Amish sometimes.
https://x.com/thomasgermain/status/2024165514155536746
The result started with 3 "sponsored links" which threw her down the rabbit hole.
This used to be easy.
I found it a sad condemnation of how far the tech industry has fallen into enshittification and is failing to provide tools that are actually useful.
Used to be.
> Nowadays an ai assist with a web search usually eliminates the search altogether and gives you a clear answer right away.
Now.
[0] https://fortune.com/2025/10/07/deloitte-ai-australia-governm...
[0]: If it contains references to nonexistent papers and fabricated quotes, the conclusions of the report are highly doubtful at best.
Is putting Google Analytics onto your website and pulling a report 'big data analytics'...?
These are not the openclaw folks
Genuinely confused, I don't get it
The Internet has been getting worse pretty steadily for 20 years now
"The Internet" is completely dead. Both as an idea and as a practical implementation.
No, Google/Meta/Netflix is not the "world wide web", they're a new iteration of AOL and CompuServe.
For those hearing this at work, better prepare an exit plan.
To put a fine point on it, yeah? Ultimately.
More people without jobs will be a heavy burden on social security systems, so in the end it's literally about starving.
If anyone still resigns, that is. They seem to have automated that too.
If the manager doesn’t have ideas, it is they who deserve the boot.