For instance, if something goes wrong on the MCP host and a server queries all data from the database and transfers it to the host, all of that data could be leaked.
It's hard to totally prevent this kind of problem when interacting with local data, but are there measures in MCP to mitigate these situations?
It's not introducing new capabilities, just solving the NxM problem, hopefully leading to more tools being written.
(At least that's how I understand this. Am I far off?)
On tools specifically, we went back and forth about whether the other primitives of MCP just reduce to tool use, but ultimately concluded that separate concepts of "prompts" and "resources" are extremely useful to express different _intentions_ for server functionality. They all have a part to play!
It appears that clients retrieve prompts from a server only to hydrate them with context, then execute/complete them somewhere else (like Claude Desktop, using Anthropic models). The server doesn’t know how effective the prompt will be in the model the client has access to. It doesn’t even know whether the client is a chat app or Zed’s code completion.
In the sampling interface, where the flow is inverted and the server presents a completion request to the client, it can suggest that the client use a particular model type and parameters. This makes sense, given only the server knows how to do this effectively.
Given the server doesn’t understand the capabilities of the client, why the asymmetry in these related interfaces?
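For concreteness, a server-initiated sampling request looks roughly like the sketch below, built around the spec's `sampling/createMessage` method. The field names (`modelPreferences`, `hints`, the priority knobs) reflect my reading of the spec and should be checked against the current revision; the model hint string is purely illustrative:

```typescript
// Sketch of a server-to-client sampling request (JSON-RPC payload).
// The method name comes from the MCP spec; treat the exact parameter
// shape as an assumption, not a definitive reference.
const samplingRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "sampling/createMessage",
  params: {
    messages: [
      { role: "user", content: { type: "text", text: "Summarize this file." } },
    ],
    // The server can only *hint* at a model; the client makes the final
    // choice, which is the asymmetry being discussed here.
    modelPreferences: {
      hints: [{ name: "claude-3-5-sonnet" }], // illustrative hint string
      intelligencePriority: 0.8,
      speedPriority: 0.2,
    },
    maxTokens: 512,
  },
};

console.log(JSON.stringify(samplingRequest, null, 2));
```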
There’s only one server example that uses prompts (fetch), and the one prompt it provides returns the same output as the tool call, except wrapped in a PromptMessage. EDIT: looks like there are some capability classes in MCP; maybe these will evolve.
https://modelcontextprotocol.io/docs/concepts/prompts
https://spec.modelcontextprotocol.io/specification/server/pr...
… but TLDR, if you think of them a bit like slash commands, I think that's a pretty good intuition for what they are and how you might use them.
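To make the slash-command intuition concrete, here's a rough sketch of the round trip. The `prompts/list` and `prompts/get` method names come from the MCP docs linked above; the prompt shape and the client-side rendering are my own illustration:

```typescript
// A client might list a server's prompts and surface each one as a
// slash command; selecting one calls prompts/get with the arguments.
const listRequest = { jsonrpc: "2.0", id: 1, method: "prompts/list" };

// A server's listing might advertise a prompt like this (illustrative):
const fetchPrompt = {
  name: "fetch",
  description: "Fetch a URL and return its contents as context",
  arguments: [{ name: "url", description: "URL to fetch", required: true }],
};

// The client can render that as a slash command for the user:
const slashForm =
  `/${fetchPrompt.name} ` +
  fetchPrompt.arguments.map((a) => `<${a.name}>`).join(" ");
console.log(slashForm); // "/fetch <url>"

// Invoking it sends a prompts/get request; the server hydrates the
// prompt into messages, and the client runs them with its own model.
const getRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "prompts/get",
  params: { name: fetchPrompt.name, arguments: { url: "https://example.com" } },
};
```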
One bit of constructive feedback: the TypeScript API isn't using the TypeScript type system to its fullest. For example, for tool providers, you could infer the type of a tool request handler's params from the json schema of the corresponding tool's input schema.
I guess that would be assuming that the model is doing constrained sampling correctly, such that it would never generate JSON that does not match the schema, which you might not want to bake into the reference server impl. It'd mean changes to the API too, since you'd need to connect the tool declaration and the request handler for that tool in order to connect their types.
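The schema-to-type inference being suggested can be sketched in plain TypeScript. None of these names (`ParamsOf`, `Prim`, `handleAdd`) are SDK API; it's just a minimal demonstration of deriving a handler's parameter type from a `const` JSON schema, covering only `number` and `string` for brevity:

```typescript
// A tool's input schema, declared `as const` so literal types survive.
const addSchema = {
  type: "object",
  properties: {
    a: { type: "number" },
    b: { type: "number" },
  },
  required: ["a", "b"],
} as const;

// Minimal JSON-schema-to-TypeScript mapping (numbers/strings only).
type Prim<T> = T extends { type: "number" }
  ? number
  : T extends { type: "string" }
  ? string
  : unknown;

type ParamsOf<S extends { properties: Record<string, unknown> }> = {
  [K in keyof S["properties"]]: Prim<S["properties"][K]>;
};

// The handler's params type is now inferred from the schema itself,
// so `params.a` and `params.b` are typed as `number`.
function handleAdd(params: ParamsOf<typeof addSchema>): number {
  return params.a + params.b;
}

console.log(handleAdd({ a: 2, b: 3 })); // 5
```

Libraries like json-schema-to-ts take this idea much further; the point is just that connecting the tool declaration to its handler lets the compiler check the params, modulo trusting the model (or a validator) to emit schema-conformant JSON.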
Could I convince you to submit a PR? We'd love to include community contributions!
I guess I can do this for my local file system now?
I also wonder: if I build an LLM-powered app and currently simply do RAG, injecting the retrieved data into my prompts, should this replace that? Can I even integrate this in a useful way?
The use case of running on your machine with your specific data seems very narrow to me right now, considering how many different context sources and use cases there are.
However, it's not quite a complete story yet. Remote connections introduce a lot more questions and complexity—related to deployment, auth, security, etc. We'll be working through these in the coming weeks, and would love any and all input!
It would probably be helpful for many of your readers if you had a focused document that addressed specifically that motivating question, together with illustrated examples. What does MCP provide, and what does it intend to solve, that a tool calling interface or RPC protocol can't?