ComplexSystems · 21 days ago
I thought this article was going to be a bunch of security theater nonsense - maybe because of the relatively bland title - but after reading it I found it incredibly insightful, particularly this:

> MCP discards this lesson, opting for schemaless JSON with optional, non-enforced hints. Type validation happens at runtime, if at all. When an AI tool expects an ISO-8601 timestamp but receives a Unix epoch, the model might hallucinate dates rather than failing cleanly. In financial services, this means a trading AI could misinterpret numerical types and execute trades with the wrong decimal precision. In healthcare, patient data types get coerced incorrectly, potentially leading to wrong medication dosing recommendations. Manufacturing systems lose sensor reading precision during JSON serialization, leading to quality control failures.

Having worked with LLMs every day for the past few years, it is easy to see every single one of these things happening.

I can practically see it playing out now: there is some huge incident of some kind, in some system or service with an MCP component somewhere, with some elaborate post-mortem revealing that some MCP server somewhere screwed up and output something invalid, the LLM took that output and hallucinated god knows what, its subsequent actions threw things off downstream, etc.

It would essentially be a new class of software bug caused by integration with LLMs, and it is almost sure to happen when you combine it with other sources of bug: human error, the total lack of error checking or exception handling that LLMs are prone to (they just hallucinate), a bunch of gung-ho startups "vibe coding" new services on top of the above, etc.

I foresee this being followed by a slew of Twitter folks going on endlessly about AGI hacking the nuclear launch codes, which will probably be equally entertaining.

cookiengineer · 21 days ago
Let's put it this way:

Before 2023 I always thought that all the bugs and glitches of technology in Star Trek were totally made up and would never happen this way.

Post-LLM I am absolutely certain that they will happen exactly that way.

I am not sure what LLM integrations have to do with engineering anymore, or why it makes sense to essentially put all your company's infrastructure into external control. And that is not even scratching the surface with the lack of reproducibility at every single step of the way.

It "somehow works" isn't engineering.

zeristor · 20 days ago
“Somehow Palpatine survived” broke Star Wars, although no AI was used in that.
BiraIgnacio · 20 days ago
When I look at how AI and LLM systems, software, and platforms have been built over the last decade (and are still being built), I can't help but think that all that ever really mattered was the response the system produced.

Never mind the quality or if it's even going to work in production.

And maybe that's all that's needed, I don't really know.

I'm sure that's just me being the old curmudgeon of a software engineer that I am, wishing people thought about more than one user using a system and two engineers supporting it.

mewpmewp2 · 20 days ago
> It "somehow works" isn't engineering.

Consider this - everything will "somehow work" if the system has been there for generations and is complex enough that no single person can hold all of it in their head at any given time.

It is easy to keep a system high quality, well maintained, and well understood for a year with a small team, but imagine doing that for 100+ years with a system constantly evolving in complexity, across generations of maintainers and people being rotated in and out.

wredcoll · 21 days ago
> It "somehow works" isn't engineering.

But it sure is fast.

withinboredom · 21 days ago
For someone who isn't a trek fan -- can you elaborate on this?
bdangubic · 20 days ago
You just described every company and every system before we had llms…
cle · 21 days ago
I don't understand this criticism by the author. MCP supports JSON Schema, and server responses must conform to the schema. If the schema requires an ISO-8601 timestamp (e.g. by specifying a "date" format in the schema) but the server sends a Unix epoch timestamp, then it is violating the protocol.

The author even later says that MCP supports JSON Schema, but also claims "you can't generate type-safe clients". Which is plainly untrue, there exist plenty of JSON Schema code generators.

ohdeargodno · 21 days ago
Except that any properly written software will respond to protocol and schema violations by throwing an error.

Claude will happily cast your int into a 2023 Toyota Yaris and keep on hallucinating things.
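A strict receiver makes that failure loud instead of silent. A minimal sketch of the "throw an error" path, with the `timestamp` field name and `check_tool_result` helper being illustrative rather than from any real MCP client:

```python
import re
from datetime import datetime

# Minimal "fail cleanly" check a client could run on a tool result before
# handing it to the model. The `timestamp` field name is illustrative.
ISO_8601 = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}")

def check_tool_result(result: dict) -> dict:
    ts = result.get("timestamp")
    if not isinstance(ts, str) or not ISO_8601.match(ts):
        # A Unix epoch (int or bare digits) lands here instead of in the prompt.
        raise ValueError(f"timestamp violates declared schema: {ts!r}")
    datetime.fromisoformat(ts)  # raises if the string isn't actually parseable
    return result

check_tool_result({"timestamp": "2024-05-01T12:00:00"})  # passes
```

The point is only that the rejection happens before the model ever sees the value; a model asked to "do its best" with an epoch will happily do so.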

dboreham · 20 days ago
imho it's a fantasy to expect type safe protocols except in the case that both client and server are written in the same (type safe) language. Actually even that doesn't work. What language actually allows a type definition for "ISO-8601 timestamp" that's complete? Everything ends up being some construction of strings and numbers, and it's often not possible to completely describe the set of valid values except by run-time checking, certainly beyond trivial cases like "integer between 0 and 10".
jongjong · 20 days ago
At its core, the article was just ramblings from someone being upset that LLMs didn't make things more complicated so that they could charge more billable hours to solve invented corporate problems... Which some people built their career on.

The merchants of complexity are disappointed. It turns out that even machines don't care for 'machine-readable' formats; even the machines prefer human-readable formats.

The only entities on this planet who appreciate so-called 'machine-readability' are bureaucrats; and they like it for the same reason that they like enterprise acronyms... Literally the opposite of readability.

avereveard · 21 days ago
MCP focuses on transport and managing context; it doesn't absolve the user of the responsibility to implement the interface sensibly (i.e. defining a schema and doing schema validation).

this is like saying "HTTP doesn't do json validation", which, well, yeah.

tomrod · 21 days ago
We already have PEBKAC - problem exists between chair and keyboard.

LLMs are basically automating PEBKAC

oblio · 21 days ago
We keep repeating this.

When desktop OSes came out, hardware resources were scarce, so all the desktop OSes (DOS, Windows, MacOS) forgot all the lessons from Unix: multi-user support, preemptive multitasking, etc. 10 years later PC hardware was faster than workstations from the 90s, yet we're still stuck with OSes riddled with limitations that stopped making sense in the 80s.

When smartphones came out there was this gold rush and hardware resources were scarce so OSes (iOS, Android) again forgot all the lessons. 10 years later mobile hardware was faster than desktop hardware from the 00s. We're still stuck with mistakes from the 00s.

AI basically does the same thing. It's all led by very bright 20- and 30-year-olds who weren't even born when Windows was first released.

Our field is doomed under a Cascade of Attention-Deficit Teenagers: https://www.jwz.org/doc/cadt.html (copy paste the link).

It's all gold rushes, and nobody does Dutch urban infrastructure design over decades. Which makes sense, as this is all driven by the US, where long-term planning is anathema.

lovich · 20 days ago
Our economic system punishes you for being born later, unless you manage to flip the table in terms of the status quo in the economy.

Of course this keeps happening

pstoll · 20 days ago
> I can practically see it playing out now: there is some huge incident of some kind, in some system or service with an MCP component somewhere, with some elaborate post-mortem revealing that some MCP server somewhere screwed up

Already happening.

https://www.infosecurity-magazine.com/news/atlassian-ai-agen...

hinkley · 21 days ago
> In healthcare, patient data types get coerced incorrectly, potentially leading to wrong medication dosing recommendations.

May have changed, but unlikely. I worked with medical telemetry as a young man and it was impressed upon me thoroughly how important parsing timestamps correctly was. I have a faint memory, possibly false, of this being the first time I wrote unit tests (and without the benefit of a test framework).

We even accounted for the lack of NTP by recalculating times off of the timestamps in their message headers.

And the reasons I was given were incident review as well as malpractice cases. A drug administered three seconds before a heart attack starts is a very different situation than one administered eight seconds after the patient crashed. We saw recently with the British postal service how lives can be ruined by bad data, and in medical data a minute is a world of difference.
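The NTP-less correction described above boils down to estimating each device's clock offset from its own header timestamp. A rough sketch under simplifying assumptions (names are hypothetical, and transit delay is ignored):

```python
from datetime import datetime

def corrected_event_time(header_ts: datetime, arrival_ts: datetime,
                         event_ts: datetime) -> datetime:
    # The device stamped `header_ts` with its own (possibly drifting) clock;
    # we received the message at `arrival_ts` on ours. Ignoring transit delay,
    # the difference estimates the device's clock offset, which we then apply
    # to every event timestamp inside the message.
    skew = arrival_ts - header_ts
    return event_ts + skew

# A device running 90 seconds slow: its "12:00:00" is really our "12:01:30".
dev_header = datetime(2024, 5, 1, 12, 0, 0)
our_arrival = datetime(2024, 5, 1, 12, 1, 30)
corrected_event_time(dev_header, our_arrival, datetime(2024, 5, 1, 11, 59, 57))
# -> 2024-05-01 12:01:27
```

A real system would also smooth the skew estimate over many messages rather than trusting a single sample.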

deathanatos · 21 days ago
> May have changed, but unlikely. I worked with medical telemetry as a young man and it was impressed upon me thoroughly how important parsing timestamps correctly was.

I also work in healthcare, and we've seen HL7v2 messages with impossible timestamps. (E.g., in the spring-forward gap.)
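Those gap timestamps are at least detectable: a wall-clock time that doesn't exist won't survive a round trip through UTC. A sketch using Python's stdlib zoneinfo (assumes the system tz database is available):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def exists_locally(naive: datetime, tz: ZoneInfo) -> bool:
    # A wall-clock time exists in `tz` iff it survives a round trip through
    # UTC; times inside a spring-forward gap come back shifted forward.
    aware = naive.replace(tzinfo=tz)
    return aware.astimezone(timezone.utc).astimezone(tz).replace(tzinfo=None) == naive

ny = ZoneInfo("America/New_York")
exists_locally(datetime(2024, 3, 10, 2, 30), ny)  # False: inside the DST gap
exists_locally(datetime(2024, 3, 10, 3, 30), ny)  # True
```

Checks like this are exactly the kind of thing that silently disappears when a timestamp gets coerced to a bare number somewhere in the pipeline.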

jongjong · 20 days ago
To me, the article was just rambling about all sorts of made up issues which only exist in the minds of people who never spent any time outside of corporate environments... A lot of 'preventative' ideas which make sense in some contexts but are mis-applied in different contexts.

The stuff about type validation is incorrect. You don't need client-side validation. You shouldn't be using APIs you don't trust as tools and you can always add instructions about the LLM's output format to convert to different formats.

MCP is not the issue. The issue is that people are using the wrong tools or their prompts are bad.

If you don't like the format of an MCP tool and don't want to give formatting instructions the LLMs, you can always create your own MCP service which outputs data in the correct format. You don't need the coercion to happen on the client side.

lowbloodsugar · 21 days ago
I’ve been successfully using AI for several months now, and there’s still no way I’d trust it to execute trades, or set the dose on an XRay machine. But startups gonna start. Let them.
Squakie · 20 days ago
I can offer a hacking/penetration testing perspective on this as a security researcher at a security consulting firm: this type of hallucination and trust is one of the largest things we exploit in our new LLM testing service. Excessive agency (one of the OWASP Top 10 LLM vulnerabilities) is the most profound and commonly exploited issue that we've been able to leverage.

If we can get an internal, sensitive-data-handling agent to ingest a crafted prompt, either via direct prompt injection against a more abstract “parent” agent, or by tainting an input file/URL it’s told to process, we can plant what I have internally coined an “unfolding injection.”

The injection works like a parasitic goal, it doesn’t just trick one agent, it rewrites the downstream intent. As the orchestrator routes tasks to other agents, each one treats the tainted instructions as legitimate and works toward fulfilling them.

Because many orchestrations re-summarize, re-plan, or synthesize goals between steps, the malicious instructions can actually gain fidelity as they propagate. By the time they reach a sensitive action (exfiltration, privilege escalation, external calls), there’s no trace of the original “weird” wording, just a confidently stated, fully-integrated sub-goal.

It’s essentially a supply-chain attack on the orchestration layer: you compromise one node in the agent network, and the rest “help” you without realizing it. Without explicit provenance tracking and policy enforcement between agents, this kind of unfolding injection is almost trivial to pull off, and we've been able to compromise entire environments based on the information the agentic system provided us, or just gave us either a bind or reverse shell in the case it has cli access and ability to figure out its own network constraints.

SSRF has been making a HUGE return in agentic systems, and I'm sad DEF CON and Black Hat didn't really have many talks on this subject this year, because it is a currently evolving security domain and an entirely new method of exploitation. The entire point of agentic systems is non-determinism, but that also makes them a security nightmare. As a researcher, though, this is basically a gold mine of all sorts of new vulnerabilities we'll be seeing.

If you work as a bug bounty hunter and see a new listing for an AI company, I can almost assuredly say you can get a pretty massive payout just by exploiting the innate trust between agents and the internal tools they leverage. Even if you don't have the architecture docs of the agentic system, you can likely inject enough into the initial task to taint the downstream agents: creatively adjust your prompt for different styles of orchestration and for however the company might be doing prompt engineering on each agent's persona and task, get the agents to list out the orchestration flow and report it back up to the parent agent, and then exploit the limited input validation between them.

beefnugs · 17 days ago
Real security is nonsense to vibe coders
throwawaymaths · 21 days ago
i mean isnt all this stuff up to the mcp author to return a reasonable error to the agent and ask for it to repeat the call with amendments to the json?
dotancohen · 21 days ago
Yes. And this is where culture comes in. The cultures of discipline in the C++ and JavaScript communities are at opposite ends of the spectrum. The concern here is that the culture of interfacing with AI tools, such as MCP, is far closer to the discipline of the JavaScript community than to that of the C++ community.
dragonwriter · 20 days ago
> i mean isnt all this stuff up to the mcp author

Mostly, no. Whether it's the client sending (statically) bad data or the server returning (statically) bad data, schema validation on the other end (assuming somehow it is allowed by the toolchain on the sending end) should reject it before it gets to the custom code of the MCP server or MCP client.

For arguments that are the right type but wrong because of the state of the universe, yes, the server receiving it should send a useful error message back to the client. But that's a different issue.

stouset · 21 days ago
This is no different than the argument that C is totally great as long as you just don’t make mistakes with pointers or memory management or indexing arrays.

At some point we have to decide as a community of engineers that we have to stop building tools that are little more than loaded shotguns pointed at our own feet.

cwilkes · 21 days ago
This implies that the input process did a check when it imported the data from somewhere else.

GIEMGO garbage in even more garbage out

nativeit · 21 days ago
What's your point? It's up to a ship's captain to keep it afloat, doesn't mean the hundreds of holes in the new ship's hull aren't relevant.
GeneralMayhem · 21 days ago
> MCP promises to standardize AI-tool interactions as the “USB-C for AI.”

Ironically, it's achieved this - but that's an indictment of USB-C, not an accomplishment of MCP. Just like USB-C, MCP is a nigh-universal connector with very poorly enforced standards for what actually goes across it. MCP's inconsistent JSON parsing and lack of protocol standardization is closely analogous to USB-C's proliferation of cable types (https://en.wikipedia.org/wiki/USB-C#Cable_types); the superficial interoperability is a very leaky abstraction over a much more complicated reality, which IMO is worse than just having explicitly different APIs/protocols.

cnst · 21 days ago
I'd like to add that the culmination of USB-C failure was Apple's removal of USB-A ports from the latest M4 Mac mini, where an identical port on the exact same device, now has vastly different capabilities, opaque to the final user of the system months past the initial hype on the release date.

Previously, you could reasonably expect a USB-C on a desktop/laptop of an Apple Silicon device, to be USB4 40Gbps Thunderbolt, capable of anything and everything you may want to use it for.

Now, some of them are USB3 10Gbps. Which ones? Gotta look at the specs or tiny icons, I guess?

Apple could have chosen to have the self-documenting USB-A ports to signify the 10Gbps limitation of some of these ports (conveniently, USB-A is limited to exactly 10Gbps, making it perfect for the use-case of having a few extra "low-speed" ports at very little manufacturing cost), but instead, they've decided to further dilute the USB-C brand. Pure innovation!

With the end user likely still having to use USB-C to USB-A adapters anyway, because the majority of thumb drives, keyboards and mice still require a USB-A port, even the ones that use USB-C on the keyboard/mouse itself. (But, of course, that's all irrelevant because you can always spend 2x+ as much for a USB-C version of any of these devices, and the fact that the USB-C variants are less common or inferior to USB-A is of course irrelevant when hype and fanaticism are more important than utility and usability.)

mafuy · 19 days ago
As far as I know, please correct me if I'm wrong, the USB spec does not allow USB-C to C cables at all. The host side must always be type A. This avoids issues like your cellphone power supplying not just your headphones but also your laptop.
afeuerstein · 21 days ago
Yeah, I laughed out loud when I read that line. Mission accomplished, I guess?
rickcarlino · 21 days ago
> SOAP, despite its verbosity, understood something that MCP doesn’t

Unfortunately, no one understood SOAP back.

(Additional context: Maintaining a legacy SOAP system. I have nothing good to say about SOAP and it should serve as a role model for no one)

jchw · 21 days ago
Agreed. In practice, SOAP was a train wreck. It's amazing how overly complicated they managed to make concepts that should've been simple, from plain XML somehow being radically more complex than it looks, to the wacky world of ill-defined standards for things like WSDLs and weird usage of multi-part HTTP. And, to top it all off, it was all for nothing, because you couldn't guarantee that a SOAP server written in one language would be interoperable with clients in other languages. (I don't remember exactly what went wrong, but I hit issues trying to use a SOAP API powered by .NET from a Java client. I feel like that should be a pretty good case!)

It doesn't take very long for people to start romanticizing things as soon as they're not in vogue. Even when the painfulness is still fresh in memory, people lament over how stupid new stuff is. Well I'm not a fan of schemaless JSON APIs (I'm one of those weird people that likes protobufs and capnp much more) but I will take 50 years of schemaless JSON API work over a month of dealing with SOAP again.

chasd00 · 21 days ago
It’s been a while but isn’t soap just xml over http-post? Seems like all the soap stuff I’ve done is just posting lots of xml and getting lots of xml back.

/“xml is like violence, if it’s not working just use more!”

cyberax · 21 days ago
This is a very hilarious but apt SOAP description: https://harmful.cat-v.org/software/xml/soap/simple

And I actually like XML-based technologies. XML Schema is still unparalleled in its ability to compose and verify the format of multiple document types. But man, SOAP was such a beast for no real reason.

Instead of a simple spec for remote calls, it turned into a spec that described everything and nothing at the same time. SOAP supported all kinds of transport protocols (SOAP over email? Sure!), RPC with remote handles (like CORBA), regular RPC, self-describing RPC (UDDI!), etc. And nothing worked out of the box, because the nitty-gritty details of authentication, caching, HTTP response code interoperability and other "boring" stuff were just left as an exercise to the reader.

AnotherGoodName · 21 days ago
I'll give a different viewpoint and it's that I hate everything about XML. In fact one of the primary issues with SOAP was the XML. It never worked well across SOAP libraries. Eg. The .net and Java SOAP libraries have huge threads on stackoverflow "why is this incompatible" and a whole lot of needing to very tightly specify the schema. To the point it was a flaw; it might sound reasonable to tightly specify something but it got to the point there were no reasonable common defaults hence our complaints about SOAP verbosity and the work needed to make it function.

Part of this is the nature of XML. There's a million ways to do things. Should some data be an attribute of the tag, or another tag? Perhaps the data should be in the body between the tags? HTML, which shares XML's SGML heritage, has this problem; e.g. you can seriously specify <font face="Arial">text</font> rather than have the font as a property of the wrapping tag. There's a million ways to specify everything and anything, and that's why it makes a terrible data format. The reader and writer must have the exact same schema in mind, and there's no way to have a default when there's simply no particular correct way to do things in XML. So everything had to be very precisely specified, to the point that it added huge amounts of work that a non-XML format with decent defaults would not have required.

This become a huge problem for SOAP and why i hate it. Every implementation had different default ways of handling even the simplest data structure passing between them and were never compatible unless you took weeks of time to specify the schema down to a fine grained level.

In general XML is problematic due to the lack of clear canonical ways of doing pretty much anything. You might say "but i can specify it with a schema" and to that i say "My problem with XML is that you need a schema for even the simplest use case in the first place".

pjmlp · 21 days ago
I have plenty of good stuff to say, especially since REST (really JSON-RPC in practice) and GraphQL seem to always be catching up to features the whole SOAP and SOA ecosystems already had.

Unfortunately as usual when a new technology cycle comes, everything gets thrown away, including the good parts.

SoftTalker · 21 days ago
I have found that any protocol whose name includes the word "Simple" is anything but. So waiting for SMCP to appear....
yjftsjthsd-h · 21 days ago
I dunno, SMTP wasn't bad last time I had to play with it. In actual use it wasn't entirely trivial, but most of that happened at layers that weren't really the mail transfer protocol's fault (SPF et al.). Although, I'm extremely open to that being the one exception in a flood of cases where you are absolutely correct :)
sirtaj · 21 days ago
I recall two SOAP-based services refusing to talk to each other because one nicely formatted the XML payload and the other didn't like that one bit. There is a lot we lost when we went to json but no, I don't look back at that stuff with any fondness.
divan · 21 days ago
No, letter S in MCP is reserved for "Security")
ohdeargodno · 21 days ago
Parsing SOAP responses on memory limited devices is such a fun experiment in just how miserable your life can get.
zaphar · 20 days ago
Granted, your SOAP library probably did the wrong thing there, but you could do surprisingly low-memory XML parsing with a SAX event-based parser. I remember taking the runtime of full DOM parsers down from hours to minutes by rewriting them as SAX parsers.
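For anyone who hasn't seen the streaming style: Python's stdlib ElementTree exposes SAX-like events through `iterparse`, and clearing elements as they complete is what keeps memory flat (the document here is synthetic, for illustration):

```python
import xml.etree.ElementTree as ET
from io import BytesIO

# A SAX-style streaming parse: iterparse yields events as the document is
# read, so memory stays flat if we clear each element once it's consumed.
xml = b"<records>" + b"<r v='1'/>" * 10000 + b"</records>"

total = 0
for _event, elem in ET.iterparse(BytesIO(xml), events=("end",)):
    if elem.tag == "r":
        total += int(elem.get("v"))
        elem.clear()  # drop the element so the tree never accumulates
print(total)  # 10000
```

The same idea is why a DOM parse that thrashes swap can become a fast single pass once rewritten around events.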
hinkley · 21 days ago
Ironically what put me entirely off SOAP was a tech presentation on SOAP.

Generally it worked very well when both ends were written in the same programming language and was horseshit if they weren’t. No wonder Microsoft liked SOAP so much.

rickcarlino · 21 days ago
And that raises the question: why have a spec at all if it is not easily interoperable? If the specification is impossible to implement and understand, just make it language-specific and call it a reference implementation. You can reinvent the wheel and it will be round.
mac-mc · 21 days ago
You're missing the most significant lesson of all that MCP knew. That all of those featureful things are way too overcomplicated for most places, so they will gravitate to the simple thing. It's why JSON over HTTP blobs is king today.

I've been on the other side of high-feature serialization protocols, and even at large tech companies, something like migrating to gRPC is a multi-year slog that can even fail a couple of times because it asks so much of you.

MCP, at its core, is a standardization of a JSON API contract, so you don't have to do as much post-training to generate various tool calling style tokens for your LLM.
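That contract really is small. Roughly the shape of a tool declaration an MCP server advertises: a name, a description the model reads, and a JSON Schema for the arguments (the `get_quote` tool here is made up for illustration):

```python
# Roughly the shape of one entry in an MCP tools/list response.
# The tool itself ("get_quote") is illustrative, not from a real server.
tool = {
    "name": "get_quote",
    "description": "Fetch the latest price for a ticker symbol.",
    "inputSchema": {
        "type": "object",
        "properties": {"symbol": {"type": "string"}},
        "required": ["symbol"],
    },
}
```

Everything beyond this (validation, tracing, auth) is left to the implementations on either side, which is exactly what the thread is arguing about.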

bobbiechen · 20 days ago
Gall’s Law: all complex systems that work evolved from simpler systems that worked.
prerok · 21 days ago
What are HTTP blobs?

I think you meant that is why JSON won instead of XML?

mac-mc · 21 days ago
JSON-over-HTTP blobs. Or blobs of schemaless json.

Not just XML, but a lot of other serialization formats and standards, like SOAP, protobuf in many cases, yaml, REST, etc.

People say REST won, but tell me how many places actually implement REST or just use it as a stand-in term for casual JSON blobs to HTTP URLs?

zorked · 21 days ago

  CORBA emerged in 1991 with another crucial insight: in heterogeneous environments, you can’t just “implement the protocol” in each language and hope for the best. The OMG IDL generated consistent bindings across C++, Java, Python, and more, ensuring that a C++ exception thrown by a server was properly caught and handled by a Java client. The generated bindings guaranteed that all languages saw identical interfaces, preventing subtle serialization differences.
Yes, CORBA was such a success.

cyberax · 21 days ago
CORBA got a lot of things right. But it was unfortunately a child of late-'80s telecom networks mixed with OOP hype.

So it baked in core assumptions that the network is transparent, reliable, and symmetric. So you could create an object on one machine, pass a reference to it to another machine, and everything is supposed to just work.

Which is not what happens in the real world, with timeouts, retries, congested networks, and crashing computers.

Oh, and CORBA C++ bindings had been designed before the STL was standardized. So they are a crawling horror, other languages were better.

cortesoft · 21 days ago
Yeah, the modern JSON centered API landscape came about as a response to failures of CORBA and SOAP. It didn’t forget the lessons of CORBA, it rejected them.
pjmlp · 21 days ago
And then rediscovered why we need schemas in CORBA and SOAP, or orchestration engines.
cyberax · 21 days ago
And now we're getting a swing back to sanity. OpenAPI is an attempt to formally describe the Wild West of JSON-based HTTP interfaces.

And its complexity and size now are rivaling the specs of the good old XML-infused times.

sudhirb · 21 days ago
I've worked somewhere where CORBA was used very heavily and to great effect - though I suspect the reason for our successful usage was that one of the senior software engineers worked on CORBA directly.
hinkley · 21 days ago
I applied for a job at AT&T using CORBA around 1998 and I think that’s the last time I encountered it other than making JDK downloads slower.

Didn't get that job; one of the interviewers asked me to write concurrent code, didn't like my answer, but his had a race condition in it and I was unsuccessful in convincing him he was wrong. He was relying on preemption not occurring on a certain instruction (or multiprocessing not happening). During my tenure at the job I did take, the real flaws in the Java Memory Model came out, and his answer became very wrong while mine was only slightly off.

antonymoose · 21 days ago
To be charitable, you can look at a commercially unsuccessful project and appreciate its technical brilliance.
drweevil · 20 days ago
Just an interesting bit of trivia, the Large Hadron Collider uses/used (don't know if it still does) CORBA in its distributed control system. (On the control system I worked on we use Sun RPC, which was fine as things go but doesn't have the language support that CORBA has. We used a separate SOAP interface to the system to allow for languages such as Python. Today I'd use gRPC, or the BEAM.)

On a more general note, I see in many critical comments here what I perceive to be a category error. Using JSON to pass data between web client and server, even in more complex web apps, is not the same thing as supporting two-way communications between autonomous software entities that are tasked to do something, perhaps something critical. There could be millions of these exchanges in some arbitrarily short time period, thus any possibility of error is multiplied accordingly, and the effect of any error could cascade if it does not fail early. I really don't believe this is a case where "worse is better." To use an analogy, everyday English is a versatile language that works great for most use cases; but when you really need to nail things down, with no tolerance for ambiguity, you get legalese or some other jargon. Or CORBA, or gRPC, etc.

SillyUsername · 21 days ago
MCP is flawed, but it learnt one thing correctly from years of RPC: complexity is the biggest time sink and drives adoption toward simpler competing standards (cf. XML vs JSON).

- SOAP - interop needs support for document- or RPC-style bindings between systems, or a combination; XML and schemas are also horribly verbose.

- CORBA - the libraries and frameworks were complex; modern languages at the time avoided them in favor of simpler standards (e.g. Java's Jini).

- gRPC - designed for speed, not readability; requires mappings.

It's telling that these days REST and JSON (via req/resp, webhooks, or even streaming) are the modern backbone of RPC. The above standards are either shoved aside or, in gRPC's case, used only where extreme throughput is needed.

Since REST and JSON are the plat du jour, MCP probably aligns with that design paradigm rather than the dated legacy protocols.

w10-1 · 20 days ago
> these days REST and JSON (via req/resp, webhooks, or even streaming) are the modern backbone of RPC

No, they're the medium of the web.

The author is specifically addressing enterprise integration into business workflows - not showing stuff in a browser.

SillyUsername · 20 days ago
How is this browser-specific, or where is a browser mentioned? The technologies can be purely "enterprise integration" of backend services. When was Swagger (OpenAPI), for example, ever forbidden from being used for RPC? E.g. an endpoint that doesn't just support a CRUD op but takes an event with an operation to execute?
mockingloris · 21 days ago
I read this thrice: ...When OpenAI bills $50,000 for last month’s API usage, can you tell which department’s MCP tools drove that cost? Which specific tool calls? Which individual users or use cases?...

It seems to be a game of catch-up for most things AI. That said, my school of thought is that certain technologies are just too big to be figured out early on - web frameworks, blockchain, ...

- the gap starts to shrink eventually. With AI, we'll just have to keep sharing ideas and caution like you have here. Such very interesting times we live in.

zwaps · 21 days ago
The author seems to fundamentally misunderstand how MCPs are going to be used and deployed.

This is really obvious when they talk about tracing and monitoring, which seem to be the main points of criticism anyway.

They bemoan that they can't trace across MCP calls, assuming somehow there would be a person administering all the MCPs. Of course each system has tracing in whatever fashion fits that system. They are just not the same system, nor owned by the same people, let alone the same companies.

Same as monitoring cost. Oh, you can’t know who racked up the LLM costs? Well of course you can, these systems are already in place and there are a million of ways to do this. It has nothing to do with MCP.

Reading this, I think it's rather a blessing to start fresh and without the learnings of 40 years of failed protocols or whatever

oblio · 21 days ago
> without the learnings of 40 years of failed protocols or whatever

1. Lessons.

2. Fairly sure all of Google is built on top of protobuf.