OK, but: that's an economic situation.
> so much less scope for engagement-hacking, dark patterns, useless upselling, and so on.
Right, so there's less profit in it.
To me it seems this will make the market more adversarial, not less. Increasing amounts of effort will be expended to prevent LLMs from interacting with your software or web pages, or in some cases to exploit the user's agentic LLM into making a bad decision on their behalf.
I mean, services _could_ make it harder to use LLMs to interact with them, but if agents are popular enough they might see customers start to revolt over it.
Related: Have you seen Nvidia's simulated 3D environments? They might not be called LLMs, but they're not far from what our LLMs actually do right now. It's just a naming difference.
An LLM which makes a tool call to a function called `ride_bike`, where that function is a different sort of model with a different set of feedback mechanisms than those available to the LLM, is NOT the same thing at all. The LLM hasn't "learned" to ride the bike. The best you can say is that the LLM has learned that the bike can be ridden, and that it has a way of asking some other entity to ride on its behalf.
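The delegation being described can be sketched concretely. In this hypothetical example (all names, including `ride_bike`, are illustrative, not any real API), the LLM only emits a structured tool-call request; a separate model with its own feedback loop does the actual riding, and the LLM sees nothing but the result:

```python
import json

def ride_bike(distance_m: float) -> dict:
    """Stand-in for a separate control model with its own feedback
    mechanisms. The LLM never sees these internals; it only gets the
    result of the call back."""
    return {"status": "ok", "distance_m": distance_m}

# The set of tools the LLM is told it may call (illustrative registry).
TOOLS = {"ride_bike": ride_bike}

def dispatch(tool_call_json: str) -> dict:
    """Route an LLM-emitted tool call to the entity that actually acts.
    This boundary is the point: the LLM asks, some other system rides."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

result = dispatch('{"name": "ride_bike", "arguments": {"distance_m": 100}}')
print(result["status"])  # the LLM learns only that the bike was ridden
```

All the LLM has "learned" here is that a `ride_bike` tool exists and how to format a request to it; the competence lives entirely on the other side of `dispatch`.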
Now, could you develop such a model and make it available to an LLM? Sure, probably. But that's not an LLM. Moreover, it involves you, a human, making novel inroads on a different sort of AI/robotics problem. It simply is not possible to accomplish with an LLM.