Readit News logoReadit News
dend · a year ago
Coordinator of the authorization RFC linked in this post[1].

The protocol is in very, very early stages and there are a lot of things that still need to be figured out. That being said, I can commend Anthropic on being very open to listening to the community and acting on the feedback. The authorization spec RFC, for example, is a coordinated effort between security experts at Microsoft (my employer), Arcade, Hellō, Auth0/Okta, Stytch, Descope, and quite a few others. The folks at Anthropic set the foundation and welcomed others to help build on it. It will mature and get better.

[1]: https://github.com/modelcontextprotocol/modelcontextprotocol...

magicalhippo · a year ago
A nice, comprehensive yet accessible blog post about it can be found here[1], got submitted earlier[2] but didn't gain traction.

[1]: https://aaronparecki.com/2025/04/03/15/oauth-for-model-conte...

[2]: https://news.ycombinator.com/item?id=43620496

dend · a year ago
Great news - Aaron has been a core reviewer and contributor to the aforementioned RFC.
martypitt · a year ago
Impressive to see this level of cross-org coordination on something that appears to be maturing at pace (compared to other consortium-style specs/protocol I've seen attempted)

Congrats to everyone.

sshh12 · a year ago
Awesome! Thanks for your work on this.
dend · a year ago
Can't take any credit - it's a massive effort across many folks much smarter than me.
Y_Y · a year ago
This reminds me of something Adam Smith said in The Wealth of Nations:

"People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices."

Ymmv, but I cannot image that this "innovation" will result in a better outcome for the general public.

EigenLord · a year ago
The author makes good general points but seems to be overloading MCP's responsibilities imo. My understanding of MCP is that it just provides a ready-made "doorway" for LLMs to enter and interact with externally managed resources. It's a bridge or gateway. So is it really MCP's fault that it:

>makes it easier to accidentally expose sensitive data.

So does the "forward" button on emails. Maybe be more careful about how your system handles sensitive data. How about:

>MCP allows for more powerful prompt injections.

This just touches on wider topic of only working with trusted service providers that developers should abide by generally. As for:

>MCP has no concept or controls for costs.

Rate limit and monitor your own usage. You should anyway. It's not the road's job to make you follow the speed limit.

Finally, many of the other issues seem to be more about coming to terms with delegating to AI agents generally. In any case it's the developer's responsibility to manage all these problems within the boundaries they control. No API should have that many responsibilities.

TeMPOraL · a year ago
Yeah. That's another in a long line of MCP articles and blogposts that's been coming up over the past few weeks, that can be summarized as "breaking news: this knife is sharp and can cut someone if you swing it at people, it can cut you if you hold it the wrong way, and is not a toy suitable for small children".

Well, yes. A knife cuts things, it's literally its only job. It will cut whatever you swing it at, including people and things you didn't intend to - that's the nature of a general-purpose cutting tool, as opposed to e.g. safety razor or plastic scissors for small children, which are much safer, but can only cut few very specific things.

Now, I get it, young developers don't know that knives and remote access to code execution on a local system are both sharp tools and need to be kept out of reach of small children. But it's one thing to remind people that the tool needs to be handled with care; it's another to blame it on the tool design.

Prompt injection is a consequence of the nature of LLMs, you can't eliminate it without degrading capabilities of the model. No, "in-band signaling" isn't the problem - "control vs. data" separation is not a thing in nature, it's designed into systems, and what makes LLMs useful and general is that they don't have it. Much like people, by the way. Remote MCPs as a Service are a bad idea, but that's not the fault of the protocol - it's the problem of giving power to third parties you don't trust. And so on.

There is technical and process security to be added, but that's mostly around MCP, not in it.

Joker_vD · a year ago
Well. To repurpose you knife analogy, they (we?) duct-taped a knife on an erratic, PRNG-controlled roomba and now discover that people are getting their Achilles tendons sliced. Technically, it's all functioning exactly as intended, but: this knife was designed specifically to be attached to such roombas, and apparently nobody stopped to think whether it was such a great idea.

And admonishments of "don't use it when people are around, but if you do, it's those people's fault when they get cut: they should've be more careful and probably wore some protective foot-gear" while technically accurate, miss the bigger problem. That is, that somebody decided to strap a sharp knife to a roomba and then let it whiz around in the space full of people.

Mind you, we have actual woodcutting table saws with built-in safety measures: they instantly stop when they detect contact with human skin. So you absolutely can have safe knives. They just cost more, and I understand that most people value (other) people's health and lives quite cheaply indeed, and so don't bother buying/designing/or even considering such frivolities.

skybrian · a year ago
The problem with the “knife is sharp” argument is that it’s too generic. It can be deployed against most safety improvements. The modern world is built on driving accident rates down to near-zero. That’s why we have specialized tools like safety razors. Figuring out what to do to reduce accident rates is what postmortems are for - we don’t just blame human error, we try to fix things systematically.

As usual, the question is what counts as a reasonable safety improvement, and to do that we would need to go into the details.

I’m wondering what you think of the CaMeL proposal?

https://simonwillison.net/2025/Apr/11/camel/#atom-everything

noodletheworld · a year ago
Some of the other issues are less important than others, but even if you accept “you have to take responsibility for yourself”, let me quote the article:

> As mentioned in my multi-agent systems post, LLM-reliability often negatively correlates with the amount of instructional context it’s provided. This is in stark contrast to most users, who (maybe deceived by AI hype marketing) believe that the answer to most of their problems will be solved by providing more data and integrations. I expect that as the servers get bigger (i.e. more tools) and users integrate more of them, an assistants performance will degrade all while increasing the cost of every single request. Applications may force the user to pick some subset of the total set of integrated tools to get around this.

I will rephrase it in stronger terms.

MCP does not scale.

It cannot scale beyond a certain threshold.

It is Impossible to add an unlimited number of tools to your agents context without negatively impacting the capability of your agent.

This is a fundamental limitation with the entire concept of MCP and needs addressing far more than auth problems, imo.

You will see posts like “MCP used to be good but now…” as people experience the effects of having many MCP servers enabled.

They interfere with each other.

This is fundamentally and utterly different from installing a package in any normal package system, where not interfering is a fundamental property of package management in general.

Thats the problem with MCP.

As an idea it is different to what people trivially expect from it.

weird-eye-issue · a year ago
I think this can largely be solved with good UI. For example, if an MCP or tool gets executed that you didn't want to get executed, the UI should provide an easy way to turn it off or to edit the description of that tool to make it more clear when it should be used and should not be used by the agent.

Also, in my experience, there is a huge bump in performance and real-world usage abilities as the context grows. So I definitely don't agree about a negative correlation there, however, in some use cases and with the wrong contexts it certainly can be true.

TeMPOraL · a year ago
Simple: if the choice is getting overwhelming to the LLM, then... divide and conquer - add a tool for choosing tools! Can be as simple as another LLM call, with prompt (ugh, "agent") tasked strictly with selecting a subset of available tools that seem most useful for the task at hand, and returning that to "parent"/"main" "agent".

You kept adding more tools and now the tool-master "agent" is overwhelmed by the amount of choice? Simple! Add more "agents" to organize the tools into categories; you can do that up front and stuff the categorization into a database and now it's a rag. Er, RAG module to select tools.

There are so many ways to do it. Using cheaper models for selection to reduce costs, dynamic classification, prioritizing tools already successfully applied in previous chat rounds (and more "agents" to evaluate if a tool application was successful)...

Point being: just keep adding extra layers of indirection, and you'll be fine.

empath75 · a year ago
"Sequential thinking" is one that I tried recently because so many people recommend it, and I have never, ever, seen the chatbot actually do anything but write to it. It never follows up any of it's chains of thoughts or refers to it's notes.
kiitos · a year ago
> It is Impossible to add an unlimited number of tools to your agents context without negatively impacting the capability of your agent.

Huh?

MCP servers aren't just for agents, they're for any/all _clients_ that can speak MCP. And capabilities provided by a given MCP server are on-demand, they only incur a cost to the client, and only impact the user context, if/when they're invoked.

Spivak · a year ago
I think the author's point is that the architecture of MCP is fundamentally extremely high trust between not only your agent software and the integrations, but the (n choose 2) relationships between all of them. We're doing the LLM equivalent of loading code directly into our address space and executing it. This isn't a bad thing, dlopen is incredibly powerful with this power, but the problem being solved with MCP just isn't that level of trust.

The real level of trust is on the order OAuth flows where the data provider has a gun sighted on every integration. Unless something about this protocol and it's implementations change I expect every MCP server to start doing side-channel verification like getting an email "hey your LLM is asking to do thing, click the link to approve." Where in this future it severely inhibits the usefulness of agents in the same vein as Apple's "click the notification to run this automation."

zoogeny · a year ago
Sure, at first, until the users demand a "always allow this ..." kind of prompt and we are back in the same place.

A lot of these issues seem trivial when we consider having a dozen agents running on tens of thousands of tokens of context. You can envision UIs that take these security concerns into account. I think a lot of the UI solutions will break down if we have hundreds of agents each injecting 10k+ tokens into a 1m+ context. The problems we are solving for today won't hold as LLMs continue to increase in size and complexity.

ZiiS · a year ago
> Rate limit and monitor your own usage. You should anyway. It's not the road's job to make you follow the speed limit.

A better metaphor is the car, not the road. It is legally required to accurately tell you your speed and require deliberate control to increase it.

Even if you stick to a road; whoever made the road is required to research and clearly post speed limits.

jacobr1 · a year ago
Exactly. It is pretty common for APIs to actually signal this too. Headers to show usage limits or rates. Good error codes (429) with actual documentation on backoff timeframes. If you use instrument your service to respect read and respect the signals it gets, everything moves smoother. Backing stuff like that back into the MCP spec or at least having common conventions that are applied on top will be very useful. Similarly for things like tracking data taint, auth, tracing, etc ... Having a good ecosystem makes everything play together much nicer.
TeMPOraL · a year ago
Also extending the metaphor, you can make a road that controls where you go and makes sure you don't stray from it (whether by accident or on purpose): it's called rail, and its safety guarantees come with reduced versatility.

Don't blame roads for not being rail, when you came in a car because you need the flexibility that the train can't give you.

fsndz · a year ago
why would anyone accept to expose sensitive data so easily with MCP ? also MCP does not make AI agents more reliable, it just gives them access to more tools, which can decrease reliability in some cases:https://medium.com/thoughts-on-machine-learning/mcp-is-mostl...
Eisenstein · a year ago
People accept lots of risk in order to do things. LLMs offer so much potential that people want to use so they will try, and it is but through experience that we can learn to mitigate any downsides.
sshh12 · a year ago
Totally agree, hopefully it's clear closer to the end that I don't actually expect MCP to solve and be responsible for a lot of this. More so MCP creates a lot of surface area for these issues that app developers and users should be aware of.
peterlada · a year ago
Love the trollishness/carelessness of your post. Exactly as you put it: "it is not the road's job to limit your speed".

Like a bad urban planner building a 6 lane city road with the 25mph limit and standing there wondering why everyone is doing 65mph in that particular stretch. Maybe sending out the police with speed traps and imposing a bunch of fines to "fix" the issue, or put some rouge on that pig, why not.

Someone · a year ago
> Rate limit and monitor your own usage. You should anyway. It's not the road's job to make you follow the speed limit.

In some sense, urban planners do design roads to make you follow the speed limit. https://en.wikipedia.org/wiki/Traffic_calming:

“Traffic calming uses physical design and other measures to improve safety for motorists, car drivers, pedestrians and cyclists. It has become a tool to combat speeding and other unsafe behaviours of drivers”

reliabilityguy · a year ago
> It's not the road's job to make you follow the speed limit.

Good road design makes it impossible to speed.

Deleted Comment

pgt · a year ago
MCP is just a transport + wire format with request/response lifecycle and most importantly: tool-level authorization.

The essay misses the biggest problem with MCP:

  1. it does not enable AI agents to functionally compose tools.

  2. MCP should not exist in the first place.
LLMs already know how to talk to every API that documents itself with OpenAPI specs, but the missing piece is authorization. Why not just let the AI make HTTP requests but apply authorization to endpoints? And indeed, people are wrapping existing APIs with thin MCP tools.

Personally, the most annoying part of MCP is the lack of support for streaming tool call results. Tool calls have a single request/response pair, which means long-running tool calls can't emit data as it becomes available – the client has to repeat a tool call multiple times to paginate. IMO, MCP could have used gRPC which is designed for streaming. Need an onComplete trigger.

I'm the author of Modex[^1], a Clojure MCP library, which is used by Datomic MCP[^2].

[^1]: Modex: Clojure MCP Library – https://github.com/theronic/modex

[^2]: Datomic MCP: Datomic MCP Server – https://github.com/theronic/datomic-mcp/

pgt · a year ago
Previous thoughts on MCP, which I won't rehash here:

- "MCP is a schema, not a protocol" – https://x.com/PetrusTheron/status/1897908595720688111

- "PDDL is way more interesting than MCP" – https://x.com/PetrusTheron/status/1897911660448252049

- "The more I learn about MCP, the less I like it" https://x.com/PetrusTheron/status/1900795806678233141

- "Upon further reflection, MCP should not exist" https://x.com/PetrusTheron/status/1897760788116652065

- "in every new language, framework or paradigm, there is a guaranteed way to become famous in that community" – https://x.com/PetrusTheron/status/1897147862716457175

I don't know if it's taboo to link to twitter, but I ain't gonna copypasta all that.

mdaniel · a year ago
I hadn't heard of PDDL before but to save others the click on x.com, this is the Xeet:

> This PDDL planning example is much more interesting than what MCP purports to be: https://en.wikipedia.org/wiki/Planning_Domain_Definition_Lan...

> Imagine a standard planning language for model interconnect that enables collaborative goal pursuit between models.

> Maybe I'll make one.

kiitos · a year ago
MCP is literally defined as a protocol.

It doesn't have anything to say about the transport layer, and certainly doesn't mandate stdio as a transport.

> The main feature of MCP is auth

MCP has no auth features/capabilities.

I think you're tilting at windmills here.

cruffle_duffle · a year ago
There are plenty of things out there that don’t use OpenAPI. In fact most things aren’t.

Even if the universe was all OpenAPI, you’d still need a lower level protocol to define exactly how the LLM reaches out of the box and makes the OpenAPI call in the first place. That is what MCP does. It’s the protocol for calling tools.

It’s not perfect but it’s a start.

pgt · a year ago
AI can read docs, Swagger, OpenAI and READMEs, so MCP adds nothing here. All you need is an HTTP client with authorization for endpoints.

E.g. in Datomic MCP[^1], I simply tell the model that the tool calls datomic.api/q, and it writes correct Datomic Datalog queries while encoding arguments as EDN strings without any additional READMEs about how EDN works, because AI knows EDN.

And AI knows HTTP requests, it just needs an HTTP client, i.e. we don't need MCP.

So IMO, MCP is an Embrace, Extend (Extinguish?) strategy by Anthropic. The arguments that "foundational model providers don't want to deal with integration at HTTP-level" are uncompelling to me.

All you need is an HTTP client + SSE support + endpoint authz in the client + reasonable timeouts. The API docs will do the rest.

Raw TCP/UDP sockets more dangerous, but people will expose those over MCP anyway.

[^1]: https://github.com/theronic/datomic-mcp/blob/main/src/modex/...

taeric · a year ago
I mean... you aren't wrong that OpenAPI doesn't have universal coverage. This is true. Neither did WSDL and similar things before it.

I'm not entirely clear on why it make sense to jump in with a brand new thing, though? Why not start with OpenAPI?

resters · a year ago
> 1. it does not enable AI agents to functionally compose tools.

Is there something about the OpenAI tool calling spec that prevents this?

pgt · a year ago
I haven't looked at the OpenAI tool calling spec, but the lack of return types in MCP, as reported by Erik Meijers, makes composition hard.

Additionally, the lack of typed encodings makes I/O unavoidable because the model has to interpret the schema of returned text values first to make sense of it before passing it as input to other tools. Makes it impossible to pre-compile transformations while you wait on tool results.

IMO endgame for MCP is to delete MCP and give AI access to a REPL with eval authorized at function-level.

This is why, in the age of AI, I am long dynamic languages like Clojure.

keithwhor · a year ago
I mean you don’t need gRPC. You can just treat all tool calls as SSEs themselves and you have streaming. HTTP is pretty robust.
pgt · a year ago
HTTP Server-Sent Events (SSE) does not natively support batched streaming with explicit completion notifications in its core specification.
serbuvlad · a year ago
This article reads less like a criticism of MCP, the internal technical details of which I don't know that much about, and they make the subject of but a part of the srticle, but a general criticism of the general aspect of "protocol to allow LLM to run actions on services"

A large problem in this article stems from the fact that the LLM may take actions I do not want it to take. But there are clearly 2 types of actions the LLM can take: those I want it to take on it's own, and those I want it to take only after prompting me.

There may come a time when I want the LLM to run a business for me, but that time is not yet upon us. For now I do not even want to send an e-mail generated by AI without vetting it first.

But the author rejects the solution of simply prompting the user because "it’s easy to see why a user might fall into a pattern of auto-confirmation (or ‘YOLO-mode’) when most of their tools are harmless".

Sure, and people spend more on cards than they do with cash and more on credit cards than they do on debit cards.

But this is a psychological problem, not a technological one!

jwpapi · a year ago
I have read 30 MCP articles now and I still don’t understand why we not just use API?
serverlessmania · a year ago
MCP allows LLM clients you don’t control—like Claude, ChatGPT, Cursor, or VSCode—to interact with your API. Without it, you’d need to build your own custom client using the LLM API, which is far more expensive than just using existing clients like ChatGPT or Claude with a $20 subscription and teaching them how to use your tools.

I built an MCP server that connects to my FM hardware synthesizer via USB and handles sound design for me: https://github.com/zerubeus/elektron-mcp.

jonfromsf · a year ago
But couldn't you just tell the LLM client your API key and the url of the API documentation? Then it could interact with the API itself, no?
12ian34 · a year ago
elektron user here. wow thank you :)

Deleted Comment

siva7 · a year ago
ChatGPT still doesn't support MCP. It really fell behind Google or Anthropic in the last months in most categories. Gemini pro blows o1 pro away.
nzach · a year ago
> why we not just use API

Did you meant to write "a HTTP API"?

I asked myself this question before playing with it a bit. And now I have a slightly better understanding, I think the main reason was created as a way to give access of your local resources (files, envvars, network access...) to your LLM. So it was designed to be something you run locally and the LLM has access.

But there is nothing preventing you making an HTTP call from a MCP server. In fact, we already have some proxy servers for this exact use-case[0][1].

[0] - https://github.com/sparfenyuk/mcp-proxy

[1] - https://github.com/adamwattis/mcp-proxy-server

throw310822 · a year ago
I'm not sure I get it too. I get the idea of a standard api to connect one or more external resources providers to an llm (each exposing tools + state). Then I need one single standard client-side connector to allow the llm to talk to those external resources- basically something to take care of the network calls or other forms of i/o in my local (llm-side) environment. Is that it?
lsaferite · a year ago
Sounds mostly correct. The standard LLM tool call 'shape' matches the MCP tool call 'shape' very closely. It's really just a simple standard to support connecting a tool to an agent (and by extension an LLM).

There are other aspects, like Resources, Prompts, Roots, and Sampling. These are all relevant to that LLM<->Agent<->Tools/Data integration.

As with all things AI right now, this is a solution to a current problem in a fast moving problem space.

yawnxyz · a year ago
I have an API, but I built an MCP around my API that makes it easier for something like Claude to use — normally something that's quite tough to do (giving special tools to Claude).
mehdibl · a year ago
Because you need mainly a bridge between the Function calling schema defined that you expose to the AI model so you can leverage them. The model need a gateway as API can't be used directly.

MCP core power is the TOOLS and tools need to translate to function calls and that's mainly what MCP do under the hood. Your tool can be an API, but you need this translation layer function call ==> Tool and MCP sits in the middle

https://platform.openai.com/docs/guides/function-calling

jasondigitized · a year ago
I played around with MCP this weekend and I agree. I just want to get a users X and then send X to my endpoint so I can do something with it. I don't need any higher level abstraction than that.

Deleted Comment

Deleted Comment

aoeusnth1 · a year ago
If you are a tool provider, you need a standard protocol for the AI agent frontends to be able to connect to your tool.
geysersam · a year ago
I think the commenter is asking "why can't that standard protocol be http and open api?"
kristoff200512 · a year ago
I think it's fine if you only need a standalone API or know exactly which APIs to call. But when users ask questions or you're unsure which APIs to use, MCP can solve this issue—and it can process requests based on your previous messages.
arthurcolle · a year ago
It is an API. You can implement all this stuff from scratch with raw requests library in python if you want. Its the idea of a standard around information interchange, specifically geared around agentic experiences like claude code, (tools like aider, previously, that are much worse) - its like a FastAPI web app framework around building stuff that can helps LLMs and VLMs and model wrapped software in ways that can speak over the network.

Basically like Rails-for-Skynet

I'm building this: https://github.com/arthurcolle/fortitude

pcarolan · a year ago
Right. And if you use OpenAPI the agent can get the api spec context it needs from /openapi.json.
idonotknowwhy · a year ago
I feel like this is being pushed to get more of the system controlled by the provider's side. After a few years, Anthropic, google, etc might start turning off the api. Similar to how Google made it very difficult to use IMAP / SMTP with Gmail
edanm · a year ago
It is an API. It's an API standardization for LLMs to interact with outside tools.
pizza · a year ago
I just want to mention something in a chat in 5 seconds instead of preparing the input data, sending it to the API, parsing the output for the answer, and then doing it all again for every subsequent message.
throw1290381290 · a year ago
Here's the kicker. It is an API.

Dead Comment

mlenhard · a year ago
One of the biggest issues I see, briefly discussed here, is how one MCP server tool's output can affect other tools later in the same message thread. To prevent this, there really needs to be sandboxing between tools. Invariant labs did this with tool descriptions [1], but I also achieved the same via MCP resource attachments[2]. It's a pretty major flaw exacerbated by the type of privilege and systems people are giving MCP servers access to.

This isn't necessarily the fault of the spec itself, but how most clients have implemented it allows for some pretty major prompt injections.

[1] https://invariantlabs.ai/blog/mcp-security-notification-tool... [2] https://www.bernardiq.com/blog/resource-poisoning/

cyanydeez · a year ago
Isn't this basically a lot of hand waving that ends up being isomorphic to SQL injection?

Thats what we're talking about? A bunch of systems cobbled together where one could SQL inject at any point and there's basically zero observability?

seanhunter · a year ago
Yes, and the people involved in all this stuff have also reinvented SQL injection in a different way in the prompt interface, since it's impossible[1] for the model to tell what parts of the prompt are trustworthy and what parts are tainted by user input, no matter what delimeters etc you try to use. This is because what the model sees is just a bunch of token numbers. You'd need to change how the encoding and decoding steps work and change how models are trained to introduce something akin to the placeholders that solve the sql injection problem.

Therefore it's possible to prompt inject and tool inject. So you could for example prompt inject to get a model to call your tool which then does an injection to get the user to run some untrustworthy code of your own devising.

[1] See the excellent series by Simon Willison on this https://simonwillison.net/series/prompt-injection/

mlenhard · a year ago
Yeah, you aren't far off with SQL injection comparison. That being said it's not really a fault of the MCP spec, more so with current client implementations of it.
jeswin · a year ago
> MCP servers can run (malicious code) locally.

I wrote an MCP Server (called Codebox[1]) which starts a Docker container with your project code mounted. It works quite well, and I've been using it with LibreChat and vscode. In my experience, Agents save 2x the time (over using an LLM traditionally) and is less typing, but at roughly 3x the cost.

The idea is to make the entire Unix toolset available to the LLM (such as ls, find), along with project specific tooling (such as typescript, linters, treesitter). Basically you can load whatever you want into the container, and let the LLM work on your project inside it. This can be done with a VM as well.

I've found this workflow (agentic, driven through a Chat based interface) to be more effective compared to something like Cursor. Will do a Show HN some time next week.

[1]: https://github.com/codespin-ai/codebox-js

jillesvangurp · a year ago
Any interpreter can run malicious code. Mostly the guidance is: don't run malicious code if you don't want it to run. The problem isn't the interpreter/tool but the entity that's using it. Because that's the thing that you should be (mis)-trusting.

The issue is two fold:

- models aren't quite trustworthy yet.

- people put a lot of trust in them anyway.

This friction always exist with security. It's not a technical problem that can or should be solved on the MCP side.

Part of the solution is indeed going to come from containerization. Give MCP agents access to what they need but not more. And part of it is going to come from some common sense and the tool UX providing better transparency into what is happening. Some of the better examples I've seen of Agentic tools work like you outline.

I don't worry too much about the cost. This stuff is getting useful enough that paying a chunk of what normally would go into somebody's salary actually isn't that bad of a deal. And of course cost will come down. My main worry is actually speed. I seem to spend a lot of time waiting for these tools to do their thing. I'd love this stuff to be a bit zippier.

jeswin · a year ago
> Give MCP agents access to what they need but not more.

My view is that you should give them (Agents) a computer, with a complete but minimal Linux installation - as a VM or Containerized. This has given me better results, because now it can say fetch information from the internet, or do whatever it wants (but still in the sandbox). Of course, depending on what you're working on, you might decide that internet access is a bad idea, or that it should just see the working copy, or allow only certain websites.

peterlada · a year ago
Let me give you some contrast here:

- employees are not necessarily trustworthy

- employers place a lot of trust in them anyway

sunpazed · a year ago
Let’s remind ourselves that MCP was announced to the world in November 2024, only 4 short months ago. The RFC is actively being worked on and evolving.
sealeck · a year ago
It's April 2025
marcellus23 · a year ago
Yes, and it's been about 4 and a half months since Nov 25, 2024.