An important background is the imminent rise of actual LLM agents, which I discuss in the next post: https://vintagedata.org/blog/posts/designing-llm-agents
So, answering a few comments:
* The shift is coming relatively soon thanks to the latest RL breakthroughs (I really encourage taking a look at Will Brown's talk). Anthropic and OpenAI are close to nailing long multi-task sequences on specialized tasks.
* There are stronger incentives to specialize models and gate them. They are especially transformative on the industry side. Right now most of the actual "AI" market is still largely rule-based/ML. Generative AI was not robust enough before, but now these systems can get disrupted — not to mention the many verticals with a big focus on complex yet formal tasks. I know large network engineering companies are scaling up their own RL capacities right now.
* Open source AI is lagging so far due to the lack of data and frameworks for large-scale RL, and of task-related data. Though we might see a democratization of verifiers, it will take time.
Several people from big labs have reached out since then and confirmed that, despite the obvious uncertainties, this is relatively on point.
Nice and provocative read! Is it fair to restate the argument as follows?
- New tech (e.g. RL, cheaper inference) is enabling agentic interactions that fulfill more of the application layer.
- Foundation model companies realize this and are adapting their business models by building complementary UX and withholding API access to integrated models.
- Application-layer value props will be squeezed out, disappointing a big chunk of AI investors and complementary infrastructure providers.
If so, any thoughts on the following?
- If agentic performance is enabled by models specialized through RL (e.g. Deep Research's o3+browsing), why won't we get open versions of these models that application providers can use?
- Incumbent application providers can put up barriers to agentic access of the data they control. How does their data incumbency and vertical specialization weigh against the relative value of agents built by model providers?
On the second points:
* Well, I'm very much involved in making more open models: I pretrained the first model on free and open data without copyright issues, and released the first version of GRPO that can run on Google Colab (based on Will Brown's work). Yet, even then, I have to be realistic: open source RL has a data issue. We don't have the action sequence data nor the recipes (emulators) that could make it possible to replicate, even on a very small scale, what the big labs are currently working on (a minimal sketch of the GRPO idea follows below).
* Agreed on this, and I'm already seeing this dynamic in a few areas. Still, it's going to be an uphill battle, as some of the data can be bought and advanced pipelines can shortcut some of the need for it, since models can be trained directly in simulated environments.
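For readers unfamiliar with GRPO, here is a minimal sketch of its core trick, the group-relative advantage: each sampled completion is scored against the mean reward of its own group, so no separate value model is needed. This is an illustrative toy in plain PyTorch, not the Colab notebook mentioned above.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """GRPO-style advantages: normalize each completion's reward against
    the mean and std of the group sampled for the same prompt."""
    # rewards has shape (num_prompts, group_size): one row of sampled completions per prompt
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Toy example: one prompt, four sampled completions scored by a verifier (1 = correct, 0 = wrong)
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0]])
print(group_relative_advantages(rewards))
# Correct completions get positive advantages, wrong ones negative;
# the policy gradient then pushes probability mass toward the former.
```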
Answering the first question, if I understand it correctly.
The missing piece is data, obviously. With search and code, it's easier to get the data, so you get such specialized products. What is likely to happen is: 1/ Many large companies work with some early design partners to develop solutions. They have the data + subject matter expertise, and the design partners bring in the skill. This way we see a new wave of RL agent startups grow. My guess is that this engagement would look different compared to a typical SaaS engagement. Some companies might do it in-house; some won't, because maintaining such systems is a task. 2/ These companies open source part of their dataset, which can be consumed by OSS devs to create better agents. This is more common in tech, where a path to monopoly is to commoditize the immediately previous layer. It might play out elsewhere too, though I do not have a high degree of confidence here.
Since I am not in the AI industry, I think I do not understand a few things:
- What is RL? Research Language?
- Does it mean that, in essence, AI companies will switch to writing enterprise software using LLMs integrated with enterprise tools?
[EDIT] Seems like you can't even ask a question on HN because 'how dare you not know something?' and you get downvoted.
* RL is Reinforcement Learning. It has already been used for a while as part of RLHF, but now we have started to find a very nice combo of reasoning + RL on verifiable tasks. The core idea is that models are not just good at predicting the next token but the next right answer (a toy sketch of such a verifier follows after these points).
* I think any infrastructure that already has some ML bundled in is especially up for grabs, but this will have a more transformative impact than your usual SaaS. Network engineering is a good example: highly formalized but also highly complex. RL models could increasingly nail that.
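As promised above, a toy illustration of what "verifiable" means here: the reward is not a learned preference model but a simple check against a known answer. The exact-match convention below is an assumption for the sketch, not any lab's actual pipeline.

```python
import re

def verifiable_reward(model_output: str, reference_answer: str) -> float:
    """Toy verifier: reward 1.0 if the model's final line matches the reference answer."""
    # Treat the last non-empty line as the model's final answer (an assumed convention)
    lines = [line.strip() for line in model_output.strip().splitlines() if line.strip()]
    final_answer = lines[-1] if lines else ""
    normalize = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    return 1.0 if normalize(final_answer) == normalize(reference_answer) else 0.0

print(verifiable_reward("Let me reason step by step...\n42", "42"))  # 1.0
print(verifiable_reward("The answer is probably 41", "42"))          # 0.0
```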
> So what is happening right now is just a lot of denial. The honeymoon period between model providers and wrappers is over.
> In short the dilemma for most successful wrappers is simple: training or being trained on. What they are doing right now is both free market research for the big labs but, even, as all outputs is ultimately generated through model providers, free data design and generation.
This is a great observation on the current situation. Over the past few years, there's been a proliferation of AI wrappers in the SaaS space; however, because they use proprietary models, they become entirely dependent on the model providers to continue offering their solution, there's little to no barrier to entry for a competing product, and they're providing free training data to the model providers. Instead, as the article suggests, SaaS builders should look into open source models (from places like GitHub, HuggingFace, or paperswithcode.com) or consider researching and training their own custom models if they want to offer long-term services to their users.
> I've read a lot of misunderstandings about DeepResearch, which isn't helped by the multiplication of open and closed clones. OpenAI has not built a wrapper on top of O3.
It also doesn't help that they let you select a model ranging from 4o-mini to o1-pro for the Deep Research task. But this confirms my suspicion that model selection is irrelevant for Deep Research tasks and for answering follow-up questions.
> Weirdly enough, while Claude 3.7 works perfectly in Claude Code, Cursor struggles with it and I've already seen several high end users cancelling their subscriptions as a result.
It's because Claude Code burns through tokens like there's no tomorrow, while Cursor attempts to carefully manage token usage and limit what's in context to remain profitable. It's gotten so bad that for any moderately complex task I switch to o1-pro or sonnet-3.7 in the Anthropic Console and max out the thinking tokens. They just released a "MAX" option, but I can still tell it's nerfed because it thinks for a few seconds, whereas I can get up to 2 minutes of thinking via the Anthropic Console.
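For reference, maxing out thinking outside of a coding IDE looks roughly like this with Anthropic's extended-thinking API parameter. A sketch only: the model ID and token budgets are illustrative and worth checking against the current docs.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",                     # assumed model ID, verify against docs
    max_tokens=20_000,                                      # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 16_000},  # generous reasoning budget
    messages=[{"role": "user", "content": "Refactor this module and explain the trade-offs..."}],
)

# The response interleaves "thinking" and "text" blocks; print only the final text
for block in response.content:
    if block.type == "text":
        print(block.text)
```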
It's abundantly clear that all these model providers are trying to pivot hard into productizing, which is ironic considering that the UX of all these model-as-a-product companies is so universally terrible. Deep Research is a major win, but OpenAI has plenty of fails: Plugins, Custom GPTs, Sora, Search (obsolete now?), and Operator are maybe just okay for casual users, but not at all a "product".
Anecdotally I noticed this in aider with 3.7 as well; the responses coming back from Claude 3.7 are wayyy more tokens than 3.5(+), and the model is a little less responsive to aider’s prompts. Upshot - it benchmarks better, but is frustrating to use and slower.
Using Claude Code, it's clear that Anthropic knows how to get the best out of their model — and the output spewing is hidden in the interface. I am now using both, depending on the task.
When did people ever believe that model selection mattered when using Deep Research? The UI may be bad, but it was obvious from day one that it followed its own workflow.
Search within ChatGPT is far from obsolete. 4o + Search remains a significant advantage in both time and cost when handling real-time, single-step queries—e.g., What is the capital of Texas?
If you have not been reading every OpenAI blog post, you can't be blamed for thinking the model picker affects Deep Research, since the UI heavily implies that.
Single-step queries are far better handled by Kagi/Google search when you care about source quality, discovery, and good UX; for anything above that, it's worth letting Deep Research do its thing in the background. I would go so far as to say that using Search with 4o you risk getting worse results than just asking the LLM directly, or at least that's been my experience.
The argument appears to be that "last-mile" specialisation of AI models by massive-compute companies will be entirely proprietary and walled off to prevent competitor extraction of data, and that these scenario/domain/task-specific models will be the product sold by these companies.
This is plausible insofar as one can find a reason to suppose compute costs for this specialisation will remain very high, and that the hard work of producing relevant data will be done best by those same companies.
I think it's equally plausible that compute will come down enough, and innovations in "post-training re-training" will occur, that you'll be able to bring this in-house within the enterprise/org. I.e., that "ML/AI Engineer" teams will arise like SEng teams.
Or that there's a limit to statistical modelling over historical cases, meaning specialisation is so exponentially demanding on historical case data production that it cannot practically occur in the places which would most benefit from it.
I think the latter is what will prevent the mega players in AI, at the moment, from making "the model the product" -- at the level they can specialise (i.e., given the amount of data needed), so can everyone else.
Perhaps these companies will transition into something SaaS-like, AI-Model-Specialisation-As-A-Service (ASS ASS) -- where they create bespoke models for orgs which can afford it.
I think you are on to something here - and this may very well be what these rumored $20k/month specialized AI "agents" end up being. https://techcrunch.com/2025/03/05/openai-reportedly-plans-to...
The idea is to create bespoke models for orgs at 90% lower compute. (We cheat a little: we use an underlying open-source model and freeze the existing knowledge.) We're currently building a specialized model + agent for bioresearch labs. We hope to bring down the costs in the long term so that these evolve into continuous learning systems that can be updated every day. The idea is exactly this: model customization + infra gives you advantages that prompting + tooling cannot.
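"Freezing the existing knowledge" usually means something like the sketch below: keep the pretrained weights fixed and train only a small number of new parameters on top. This is a generic PyTorch illustration under that assumption, not the commenter's actual stack (which may well use adapters/LoRA instead of a plain head).

```python
import torch
import torch.nn as nn

class FrozenBaseWithHead(nn.Module):
    """Keep the pretrained backbone frozen and train only a small task head,
    which is where most of the compute savings come from."""
    def __init__(self, backbone: nn.Module, hidden_size: int, num_labels: int):
        super().__init__()
        self.backbone = backbone
        for param in self.backbone.parameters():
            param.requires_grad = False                 # the "existing knowledge" stays fixed
        self.head = nn.Linear(hidden_size, num_labels)  # only these weights get gradients

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                           # no backprop through the frozen backbone
            hidden = self.backbone(features)
        return self.head(hidden)

# Only the head's parameters go to the optimizer:
# optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-4)
```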
> Inference cost are in free fall. The recent optimizations from DeepSeek means that all the available GPUs could cover a demand of 10k tokens per day from a frontier model for… the entire earth population. There is nowhere this level of demand. The economics of selling tokens does not work anymore for model providers: they have to move higher up in the value chain.
I've been using Cline so I can understand the pricing of these models and it's insane how much goes into input context + output. My most recent query on openrouter.ai was 26,098 input tokens -> 147 output tokens. I'm easily burning multiple dollars an hour. Without a doubt there is still demand for cheaper inference.
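As a back-of-the-envelope check, here is the arithmetic for a single call of that shape, with per-million-token prices that are purely illustrative (roughly the order of magnitude of frontier models in early 2025, not exact figures):

```python
PRICE_PER_M_INPUT = 3.00    # USD per million input tokens (assumed for illustration)
PRICE_PER_M_OUTPUT = 15.00  # USD per million output tokens (assumed for illustration)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * PRICE_PER_M_INPUT + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT

# The query from the comment: 26,098 input tokens -> 147 output tokens
print(f"${request_cost(26_098, 147):.4f} per request")
# ~$0.08 per call: a coding agent firing a few dozen such calls per hour
# is already at multiple dollars, so cheaper inference still matters.
```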
I don't disagree, but I think we need to be careful of our vocabulary around the word "model." People are starting to use it to refer to the whole "AI system", rather than the actual transformer model.
This article is talking about models that have been trained specifically for workflow orchestration and tool use. And that is an important development.
But the fundamental architectural pattern isn't different: you run the model in some kind of harness that recognizes tool-use invocations, calls out to the external tool/RAG/codegen/whatever, then feeds the results back into the context window for additional processing.
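A minimal sketch of that harness loop, with a hypothetical call_model function and TOOLS registry standing in for whatever inference API and tool set is actually used:

```python
import json

def call_model(messages: list[dict]) -> dict:
    """Hypothetical inference call. Assumed to return either
    {"type": "text", "content": ...} or
    {"type": "tool_call", "name": ..., "arguments": {...}}."""
    raise NotImplementedError  # stand-in for any model API

TOOLS = {
    "web_search": lambda arguments: "...search results...",  # placeholder tool
}

def run_agent(user_prompt: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply["type"] == "text":
            return reply["content"]               # model produced a final answer
        # Model requested a tool: execute it and feed the result back into context
        result = TOOLS[reply["name"]](reply["arguments"])
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "tool", "content": str(result)})
    return "Stopped after max_steps without a final answer."
```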
Architecturally speaking, the harness is a separate thing from the language model. A model can be trained to use Anthropic's MCP, for example, but the capabilities of MCP are not "part" of the model.
A concrete example: A model can't read a webpage without a tool, just like a human can't read a webpage without a computer and web browser.
I just feel like it's important to make a logical distinction between a model and the agentic system using that model. Innovation in both areas is going to proceed along related but different paths.
While I appreciate the distinction you're pointing out, I disagree with your conclusion that the agentic system and its environment will remain separate going forward. There are strong incentives to merge the external environment more closely with the model's environment. I can imagine a future where GPUs have a direct network interface and an OS-like engine that allows them to interoperate with the external environment more directly.
It seems like a natural line of progress as RL is becoming mainstream for language models; if you can build the verifier into the GPU itself, you can drastically speed up training runs and decrease inference costs.
> Inference cost are in free fall. The recent optimizations from DeepSeek means that all the available GPUs could cover a demand of 10k tokens per day from a frontier model for… the entire earth population. There is nowhere this level of demand. The economics of selling tokens does not work anymore for model providers: they have to move higher up in the value chain.
Even before DeepSeek, prices were declining by about 90% per year at constant performance. The way to think about the economics is different, I think. Think of it as any other industry that is on a learning curve: chips, batteries, solar panels, you name it. The price in these industries keeps falling each year. The winners are the companies that can keep scaling up their production. Think of TSMC, for example. Nobody can produce high-quality chips for a lower price than TSMC, due to economies of scale. For instance, one PhD at the company can spend 4 years optimizing a tiny part of the process, but it's worth it, because if it makes the process 0.001% cheaper to run, then the PhD has paid for themselves at TSMC's scale.
So the economics of selling tokens does work. The question is who can keep scaling up long enough that the rest (have to) give up.
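To put that learning-curve claim in numbers, a quick sketch of how a constant ~90%/year decline compounds (the starting price is purely illustrative):

```python
start_price = 10.0      # hypothetical USD per million tokens at year 0
annual_decline = 0.90   # ~90% cheaper each year at constant quality, per the comment

for year in range(4):
    price = start_price * (1 - annual_decline) ** year
    print(f"year {year}: ${price:.3f} per million tokens")
# year 0: $10.000, year 1: $1.000, year 2: $0.100, year 3: $0.010
# Token revenue only holds up if volume grows faster than ~10x per year.
```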
I'm confused by the language here; it seems "model" means different things.
To me a "model" is a static file containing numbers. In front of that file is an inference engine that receives input from a user, runs it through the "model" and outputs the result. That inference engine is a program (not a static file) that can be generic (can run any number of models of the same format, like llama.cpp) or specific/proprietary. This program usually offers an API. "Wrappers" talk to those APIs and therefore, don't do much (they're neither an inference engine, nor a model) -- their specialty is UI.
But in this post it seems the term "model" covers a kind of full package that goes from LLM to UI, including a specific, dedicated inference engine?
If so, the point of the article would be that, because inference is in the process of being commoditized, the industry is moving to vertical integration so as to protect itself and create unique value propositions.
Is this interpretation correct?
I find the distinction you draw between weights and a program interesting - particularly the idea that one is a "static file" and the other isn't.
What makes a file non-static (dynamic?) other than +x?
Both are instructions about how to perform a computation. Both require other software/hardware/microcode to run. In general, the stack is tall!
Even so, I do agree that "a bunch of matrices" feels different to "a bunch of instructions" - although arguably the former may be closer in architecture to the greatest computing machine we know (the brain) than the latter.
</armchair>
Arguably the distinction between a .gguf file and a .gguf file with a llama.cpp runner slapped in front of it is negligible. But it does raise an interesting point the article glosses over:
There is a lot happening between a model file sitting on a disk and serving it as an API with an attached playground, billing, abuse handling, etc., while handling the load of thousands or millions of users calling these incredibly demanding programs. A lot of clever software and good hardware, even down to acquiring buildings and dealing with the order backlog for backup diesel generators.
Improvements in that layer were a large part of what allowed OpenAI to go from the relative obscurity of GPT-3.5 to generating massive hype with a ChatGPT anyone could try at a whim. As a more recent example, x.ai seems to be struggling with that layer a lot right now. Grok 3 is pretty good, but has almost daily partial outages. The 1M context model is promised but never rolls out; instead, on some days the served context size is even less than the usual 64k. And they haven't even started making it available on the API.
All of this will be easy when we reach the point where everyone can run powerful LLMs on their own device, but for now, just having a 400B-parameter model sitting on your hard drive doesn't get your business very far.
Yeah, "static" may not be the correct term, and sure, everything is a file. Yet +x makes a big difference. You can't chmod a list of weights and have it "do" anything.
So to clarify: the important product that people will ultimately want is the model. Obviously you need to design an infra/UI around it but that's not the core product.
The really important distinction is between workflows (what everyone uses in applied LLMs right now) and actual agents. LLM agents can make their own decisions, browse online, use tools, etc. without direct supervision, as they are directly trained for the task. They internalize all the features of LLM orchestration.
The expression ultimately comes from a slide from OpenAI from 2023 https://pbs.twimg.com/media/Gly1v0zXIAAGJFz?format=jpg&name=... — so in a way it's a long-held vision in big labs, just getting more acute now.
I wouldn't say it is correct. A model is not just a static file containing numbers. Those weights (numbers) you are talking about are absolutely meaningless without the architecture of the model.
The model is the inference engine; a model which can't do inference isn't a model.