I think MCP is awesome, mainly because it forces devs to design the simplest possible tools/APIs/functions so even an average-performance LLM can use them correctly to get things done.
As developers, we often want everything to be rich, verbose, and customizable — but the reality is that for most users (and now for AIs acting on their behalf), simplicity wins every time. It’s like designing a great UI: the fewer ways you can get lost, the more people (or models) can actually use it productively.
If MCP ends up nudging the ecosystem toward small, well-defined, composable capabilities, that’s a win far beyond just “AI integration.”
I don’t like MCP because it relies on good faith from the plugin provider. It works great in closed, trusted environments but it cannot scale across trust boundaries.
It just begs for spam and fraud, with badly-behaving services advertising lowest-cost, highest-quality, totally amazing services. It feels like the web circa 1995… lots of implicit trust that isn’t sustainable.
Totally agree - the true source of all of the value here is the new incentive to write very simple services with very simple documentation and to make that documentation easily discoverable.
It fills a gap that exists in most service documentation: an easily discoverable page for developers (specifically, those who already know how to use their ecosystem of choice's HTTP APIs) that has a very short list of the service's most fundamental functionality with a simplified specification so they can go and play around with it.
I’m just getting into MCP (building my own server and trying some canned ones), and one thing I’ve noticed — some servers seem to devour your context window before you’ve even typed a single token / question.
My favorite example is the public Atlassian one — https://www.atlassian.com/blog/announcements/remote-mcp-serv...
Even with Claude or Gemini CLI (both with generous limits), I run out of context and resources fast.
With local LLMs via LM Studio? Forget it — almost any model will tap out before I can get even a simple question in.
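A rough way to see where that context goes, as a hypothetical sketch (the file name is made up): dump the server's tools/list response to JSON and count tokens with tiktoken. The tool names, descriptions, and input schemas are what the host injects into the prompt, so this is roughly the fixed cost you pay before asking anything.

```python
# token_budget.py: rough estimate of how much context a server's tool list consumes.
# Assumes you've saved the JSON-RPC tools/list result to tools.json (hypothetical file name).
import json

import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # approximation; hosts may tokenize differently

with open("tools.json") as f:
    tools = json.load(f)["tools"]

total = 0
for tool in tools:
    # name + description + input schema is roughly what the host injects per tool
    blob = json.dumps(tool, ensure_ascii=False)
    n = len(enc.encode(blob))
    total += n
    print(f"{tool['name']:40s} ~{n} tokens")

print(f"\n{len(tools)} tools, ~{total} tokens before you've asked anything")
```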
There’s very little actual engineering going into designing MCP interfaces that work efficiently with the way LLM workflows actually operate. Many MCPs offer tools that allow an LLM to retrieve a list of ‘things that exist’, with the expectation that the LLM will then pick something out of that list for further action with a different tool. There’s very little evidence that LLMs are good at using tools that work like that, and massive lists of ‘things that exist’ eat tokens and context.
Many businesses are rushing to put out something that fits the MCP standard but not taking the time to produce something that lets an LLM achieve things with their tool.
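To make the ‘list of things that exist’ point concrete, here is a hypothetical pair of tool definitions (names and schemas invented for illustration, loosely following MCP's name/description/inputSchema shape). The first forces the model to page through everything; the second lets it ask a narrow question and bounds what comes back into its context.

```python
# Hypothetical tool definitions, for illustration only.

# The common pattern: dump "everything that exists" and hope the model picks well.
list_tickets = {
    "name": "list_tickets",
    "description": "Return every ticket in the project.",
    "inputSchema": {"type": "object", "properties": {}},
}

# A leaner design: let the model state what it actually wants,
# and cap how much comes back into its context.
search_tickets = {
    "name": "search_tickets",
    "description": "Search tickets by free-text query; returns at most `limit` matches "
                   "with id, title, and status only.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "status": {"type": "string", "enum": ["open", "in_progress", "closed"]},
            "limit": {"type": "integer", "default": 10, "maximum": 50},
        },
        "required": ["query"],
    },
}
```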
> Many businesses are rushing to put out something that fits the MCP standard but not taking the time to produce something that lets an LLM achieve things with their tool
I think they'll have a while where they can get away with this approach too. For a good while, most people will probably blame the AI or the model if it doesn't use Atlassian tools well. It'll probably be quite some time before people start to notice that Atlassian specifically doesn't work well, but almost all of their other tools do.
(More technical users might notice sooner—obviously, given the context of this thread—but I mean enough of a broader user base noticing to have reputational impact)
MCPs are the most basic solution possible. Shoving the tool definitions into a vector store and having a subagent search for relevant tools, then another subagent run the tools, would greatly reduce the impact on context. I think it’d work in theory, but it’s so annoying to have to do something like this. We’re still in a world where we have some building blocks rather than full-fledged toolboxes.
> MCPs are the most basic solution possible. Shoving the tool definitions into a vector store and having a subagent search for relevant tools, then another subagent run the tools, would greatly reduce the impact on context.
That's a toolchain design approach that is independent of MCPs. A toolchain using MCP could do that and there would be no need for any special support in the protocol.
Most UIs currently are unsophisticated and let you turn tools on or off on a server-by-server basis. For some large servers (especially if they act as aggregators) this approach isn't going to be desirable, and you are going to want to select individual tools to activate, not servers. But that's a UI issue more than a protocol issue.
We've been thinking that an intermediate (virtual) server layer might be helpful here. Actively working on something to solve that now and looking for feedback, please reach out if interested.
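For what it's worth, the retrieval idea upthread needs no protocol support at all. Here is a minimal sketch of it; the embedding function is a stand-in (a hashed bag-of-words), and a real setup would use an actual embedding model. The host indexes every tool description once, then only the top few matches for the current task get placed into the model's context.

```python
# Sketch: pick a handful of relevant tools instead of exposing all of them.
import hashlib
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Stand-in embedding: hashed bag-of-words. Swap in a real embedding model in practice."""
    v = np.zeros(dim)
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        v[idx] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def top_k_tools(tools: list[dict], task: str, k: int = 3) -> list[dict]:
    """Return the k tool definitions whose descriptions best match the task."""
    task_vec = embed(task)
    scored = [(float(embed(t["description"]) @ task_vec), t) for t in tools]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t for _, t in scored[:k]]

# Example: only the selected definitions would be handed to the model.
tools = [
    {"name": "search_tickets", "description": "Search project tickets by text query"},
    {"name": "create_ticket", "description": "Create a new ticket with title and body"},
    {"name": "list_users", "description": "List all users in the workspace"},
    {"name": "post_comment", "description": "Add a comment to an existing ticket"},
]
print([t["name"] for t in top_k_tools(tools, "find the open ticket about login timeouts", k=2)])
```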
The way the function is described to the LLM matters. Even when the parameters are the same and the effect is the same, the title and description can fundamentally influence how the task is performed.
E.g. give the model your login token/cookies so it can curl the pages and interact with them - or have it log in as you with Playwright MCP.
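If you go the cookie route, the mechanics are mundane; here is a hypothetical sketch using Python's requests with a copied session cookie (the cookie name, domain, and endpoint are placeholders for whatever your service actually uses).

```python
# Sketch: skip the MCP server and let the agent hit the service's HTTP API as you.
# Cookie name, domain, and endpoints below are placeholders for illustration.
import requests

session = requests.Session()
session.cookies.set("session_id", "PASTE_YOUR_COOKIE_VALUE", domain="example-tracker.com")

# The agent can now call the same endpoints the web UI uses.
resp = session.get(
    "https://example-tracker.com/api/tickets",
    params={"query": "login timeout", "limit": 10},
    timeout=30,
)
resp.raise_for_status()
for ticket in resp.json()["tickets"]:
    print(ticket["id"], ticket["title"])
```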
The other issue is that you cannot think of MCP servers as universal pluggable systems that can fit into every use-case with minimal wrapping. Real-world scenarios require pulling a lot of tricks. Caching can be done at a higher or lower level depending on the use-case. What information the MCP server communicates also differs depending on the use-case (should we replace these long IDs with shorter IDs that are automatically translated back to the long ones?). Should we automatically tinyurl all the links to reduce hallucination? Which operations can be effectively solved with pure algorithms (compressing 2-3 operations into one), because doing this with LLMs is not only error-prone but also not optimal (imagine using an LLM to grep for strings in many files one by one via tool calls rather than using grep to search for strings - not the same)?
There are so many things to consider. MCP is a nice abstraction, but it is not a silver bullet.
Speaking from experience with actual customers and real use-cases.
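As one concrete example of the kind of wrapping described above, here is a hypothetical ID-shortening layer (names invented): long opaque IDs get swapped for short handles before results reach the model and translated back when the model calls a tool, which saves tokens and leaves less room for the model to mangle an ID.

```python
# Sketch of an ID-shortening layer between an MCP server and the model.
class IdShortener:
    def __init__(self, prefix: str = "item"):
        self.prefix = prefix
        self._long_to_short: dict[str, str] = {}
        self._short_to_long: dict[str, str] = {}

    def shorten(self, long_id: str) -> str:
        """Map an opaque ID to a stable short handle shown to the model."""
        if long_id not in self._long_to_short:
            short = f"{self.prefix}-{len(self._long_to_short) + 1}"
            self._long_to_short[long_id] = short
            self._short_to_long[short] = long_id
        return self._long_to_short[long_id]

    def expand(self, short_id: str) -> str:
        """Translate a short handle back to the real ID before calling the backend."""
        return self._short_to_long[short_id]


ids = IdShortener(prefix="ticket")
# Outbound: rewrite results before they hit the model's context.
raw = [{"id": "a9f83c2e-7d41-4b6e-9b2f-1c55e0a2d9f7", "title": "Login timeout"}]
for row in raw:
    row["id"] = ids.shorten(row["id"])   # -> "ticket-1"
# Inbound: the model asks for "ticket-1"; we expand it before the real API call.
real_id = ids.expand("ticket-1")
```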
This is Web 2.0. You're in the process of rediscovering mashups. Before it was SOAP and REST/HTTP and now it's...well, it's still kind of REST/HTTP because MCP is JSON-RPC. There was this brief, beautiful period where every "learn to code" book ended with a couple of chapters on how to make your app do google searches and throw the results into a word graph or something before all the big tech companies locked that sort of access down.
Not only that, apparently we finally got Jini and Agent Tcl back!
https://www.usenix.org/conference/fourth-annual-usenix-tcltk...
https://www.eetimes.com/jini-basics-interrelating-with-java/
Weirdly, I'm a little optimistic that it might work this time. AI is hot, which means that suddenly we don't care about IP anymore, and if AIs are the ones that are mostly using this protocol, providers will perhaps be in less of a rush to block everybody from doing cool things.
You’re totally right, and that’s why I think this era will fail.
Web 2.0 failed because eventually people realized to make money they needed to serve ads, and to do that they needed to own the UI. Making it easy to exfiltrate data meant switching cost was low, too. Can’t have that when you’re trying to squeeze value out of users. Look at the evolution of the twitter API over the 2.0 era. That was entirely motivated by Twitter’s desperate need to make money through ads.
Only way we avoid that future is if we figure out new business models, but I doubt that will happen. Ads are too lucrative and people too resistant to pay for things.
Ads really aren’t all that lucrative, though; they’re just simple. I worked for a company that was trying to figure out an alternative to ad revenue (we failed), and our people did some research: the average internet user ends up being shown (if I remember correctly) something like $60/month of ads, total.
Where the goal was to have a site's data be machine-readable so it could be mashed up into something new? Instead of making it easier to gather, the big sites locked the bulk of their data down, so it never gained widespread adoption.
Web 2.0 is what we mostly have now -- social, user-generated content and interaction.
I think 3.0 as a term was taken over by those weird crypto guys with their alternative DNS roots and suffixes, which literally no one uses. “Own your domain forever!!*”
* Disclaimer: domain not usable for any purposes except on computers where you have root to install their alternative resolver
And with the cynicism out of the way, what an insightful and refreshing article!
The more I look into MCP, the less I understand the hype. It's an OK API that describes how to fetch a list of tools and resources and retrieve them; somehow this is supposed to be the standard for AI-to-environment communication, and...that's it? Am I missing something vital there?
Yep, it's like 5 JSON schemas in a trench coat. Not not useful, but also not some revolution. The biggest win seems to be just convincing rando 3rd parties to go along with it.
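For anyone who hasn't looked, the wire format really is that small. A paraphrased sketch of the two core exchanges follows (field values simplified, the tool here is invented; the spec at modelcontextprotocol.io has the authoritative shapes):

```python
# Paraphrased MCP exchanges over JSON-RPC 2.0 (simplified; example tool is invented).
import json

# Client asks the server what tools it offers.
tools_list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_tickets",
                "description": "Search tickets by free-text query",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# Client (on the model's behalf) invokes one of them.
tools_call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "search_tickets", "arguments": {"query": "login timeout"}},
}

print(json.dumps(tools_call_request, indent=2))
```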
Web services started as the same open utopia. Once everyone was in, they jacked up the prices so high that it killed the initial consumer apps (e.g. Google Maps and Reddit).
Nobody is giving access to their walled garden for the good of open-anything. It's what the VCs and stockholders demand and they're the ones putting up the cash to keep this AI hype funded in spite of it running at a loss.
Given they haven't put security into MCP yet, I guess they'll need to do that first before they move on to reinventing API keys so they can charge for access and hail that as the next reason the line will go up.
If MCP develops as an open standard and this "loophole" gains enough popularity, then companies can only really limit access to their services. This seems like a generally good thing that will enhance automation. In some cases it doesn't really matter if the automation comes from agents, hard-coded software or a combination of the two.
Some remote MCPs will get locked down to known client endpoints, same as any other HTTP service, so if companies are really concerned about them being AI-use-only and don't mind cutting off some AI use to preserve that exclusivity, they’ll lock it down to the big known hosted AI frontends (assuming those end up supporting MCP; the only one I know of that does currently is ChatGPT Deep Research, and only for very limited shapes of servers).
OTOH, that only affects those services; it won't stop people from leveraging MCP as, say, a generic local plugin model for non-AI apps.