This is great. As someone who has made a few MCP servers in the last few months, MAN this spec moves fast; well ahead of Anthropic's internal support for it, and well ahead of documentation for implementation. It's like the JavaScript community suddenly got automatic code creation agents and went to town.
That said, the original spec needed some rapid iteration. With https support finally in relatively good shape, I hope we'll be able to take a year to let the API dust settle. Spec updates every three months are really tough, especially when not versioned, thoroughly documented, or archived properly.
One weird thing I found a few weeks ago: when I added my remote MCP to Claude's integrations tab on the website, I was getting OAuth errors.
Turns out they require a special "claudeai" scope. Once I added that to my server, I was able to use it remotely in Claude Desktop!
I couldn't find any docs or reasons online for them requesting this scope.
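In case it saves someone the same debugging session, here's a rough sketch of the kind of scope check that has to allow it on the server side; the scope list and helper names here are purely illustrative, not from any particular OAuth library.

```typescript
// Hedged sketch only: if your authorization endpoint validates requested
// scopes against an allowlist, Claude's extra "claudeai" scope has to be in it.
const SUPPORTED_SCOPES = new Set(["mcp:tools", "offline_access", "claudeai"]);

function validateScopes(requestedScope: string): string[] {
  const scopes = requestedScope.split(" ").filter(Boolean);
  const unknown = scopes.filter((s) => !SUPPORTED_SCOPES.has(s));
  if (unknown.length > 0) {
    // Rejecting unrecognized scopes here is exactly what produced the OAuth
    // errors above; either allow "claudeai" or silently drop unknown scopes.
    throw new Error(`invalid_scope: ${unknown.join(" ")}`);
  }
  return scopes;
}
```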
Also, I have been using remote MCPs in Claude Code for weeks with the awesome mcp-remote proxy tool. It's nice to not need that any longer!
Then, just as I'm writing a book on MCP servers with OAuth, elicitations come out! I'm rushing to update the book and be the best source for every part of the latest spec, as I can already see lots of gaps in the documentation on all these things.
Huge shout out to VS Code for being the best MCP client; they already have support for elicitations in Insiders, and it works great in my testing.
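For anyone who hasn't looked at elicitations yet: they let the server ask the client to collect structured input from the user mid-request. A rough sketch of the round trip as I read the 2025-06-18 spec revision; treat the field names as approximate and check the current spec before relying on them.

```typescript
// Server -> client request asking the user for a value, plus the reply.
const elicitationRequest = {
  jsonrpc: "2.0",
  id: 7,
  method: "elicitation/create",
  params: {
    message: "Which GitHub org should I create the repository in?",
    requestedSchema: {
      // Flat, primitive-only JSON schema so clients can render a simple form.
      type: "object",
      properties: { org: { type: "string", description: "GitHub organization" } },
      required: ["org"],
    },
  },
};

const elicitationResponse = {
  jsonrpc: "2.0",
  id: 7,
  result: { action: "accept", content: { org: "my-org" } },
};
```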
Yeah, security really is an afterthought with most of these tools, but man the community is moving insanely fast — probably because most of these people are using these automation tools to develop their MCP servers in the first place.
It’s interesting to see other tools struggling to keep up. ChatGPT supposedly will get proper MCP client support “any day now”, but I don’t see codex supporting it any time soon.
Aider is very much struggling to adapt as well, as its whole workflow of editing and navigating files is easily replaced by MCP servers (and probably improved, since they provide much more effective ways of separating signal from noise), so it'll be interesting to see how tools adapt.
I’d love for Claude Code (or any tool for that matter) to fully embrace the agentic way of coding, e.g. have multiple agents specialize in different topics and some “main” agent directing them all. Those workflows seem to be working really well.
The real security issue is around the use of ‘YOLO mode’ where you just let the agent invoke tools in a completely unattended manner. It’s not much different than slapping sudo in front of every shell command or running as root.
People are going to continue doing that because these agentic tasks can take some time to run and checking in to approve a command so often becomes an annoyance.
I can’t see a way around that except to have some kind of sandboxing, or a concept of untrusted or tainted input rather than treating all tokens the same. Maybe a way of detecting whether the response of a tool is within a threshold of acceptability for that MCP's definition (which is easier with structured output), used to force a manual confirmation, or a straight-up rejection if it's deemed unusual or unsafe.
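Nothing like this exists in the mainstream clients as far as I know, but as a sketch of what I mean (assuming the ajv JSON-schema validator and some purely illustrative heuristics):

```typescript
import Ajv from "ajv"; // assumes the ajv JSON-schema validator is installed

type Verdict = "auto_approve" | "confirm" | "reject";

// Treat every tool result as tainted: anything that doesn't match the tool's
// declared output schema, or that looks like it's smuggling instructions,
// forces a human confirmation instead of flowing straight back to the model.
function classifyToolResult(outputSchema: object, result: unknown): Verdict {
  const validate = new Ajv().compile(outputSchema);
  if (!validate(result)) return "reject"; // structurally off-spec
  const text = JSON.stringify(result);
  const suspicious = /ignore (all )?previous instructions|curl .+\|\s*(sh|bash)/i;
  if (suspicious.test(text) || text.length > 50_000) return "confirm";
  return "auto_approve";
}
```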
The aider slowdown is a real bummer. I'd love to have Claude Code's UI with the model choice aider gets me, but I'm not willing to give up tool integration.
> Javascript community suddenly got automatic code creation agents, and went to town.
I've been working on an MCP server[0] that lets LLMs safely and securely generate and execute JavaScript in a sandbox, including using `fetch` to make API calls. It includes a built-in secrets manager to prevent exposing secrets to the LLM.
I think this unlocks a lot of use cases that require code execution without compromising security. The biggest one is that you can now ask the LLM to make API calls securely, because the JS runs in a C# interpreter with constraints on memory, time, and statement count, and with hidden secrets (e.g. API keys).
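To make the pattern concrete, this is the general shape rather than the actual runjs API; `runInSandbox` and the option names are stand-ins for whatever the server exposes:

```typescript
// Hypothetical executor signature, standing in for the server's actual API.
declare function runInSandbox(
  script: string,
  opts: {
    timeoutMs: number;
    memoryLimitMb: number;
    maxStatements: number;
    secrets: Record<string, string>;
  }
): Promise<unknown>;

// The model writes sandbox code that refers to secrets by name; the host
// injects the real values only inside the sandbox, so keys never enter the
// prompt or the model's context.
const modelGeneratedScript = `
  const res = await fetch("https://api.example.com/v1/orders", {
    headers: { Authorization: "Bearer " + secrets.get("EXAMPLE_API_KEY") },
  });
  return res.json();
`;

// Host side: run with hard limits and a secrets lookup the model never sees.
const result = await runInSandbox(modelGeneratedScript, {
  timeoutMs: 2_000,
  memoryLimitMb: 64,
  maxStatements: 10_000,
  secrets: { EXAMPLE_API_KEY: process.env.EXAMPLE_API_KEY ?? "" },
});
console.log(result);
```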
The implementation is open source, with sample client code in JS using the Vercel AI SDK and a demo UI.

[0] https://github.com/CharlieDigital/runjs
The crazy thing about things moving fast is that people bought Cursor for hundreds of millions when it's already outdated by Claude Code. Very foolish of the purchasers, but very smart for the founders.
I think this is why we're seeing founders selling so quickly with these startups. You could wait some weeks or months to sell higher, but it seems the chances are high that whatever you've built will be outdated by then, so why risk it?
That's great! It would be even better if one of the features included in the table was whether a given MCP server supports OAuth Dynamic Client Registration, which is optional in the MCP standard.
The MCP server technically doesn't support DCR. The authorization server for the MCP server does, which is a minor distinction.
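For anyone who hasn't run into it: DCR (RFC 7591) just means the client can register itself with the authorization server at runtime instead of a human pre-creating a client ID. A rough sketch, with an illustrative endpoint URL:

```typescript
// Dynamic Client Registration (RFC 7591): the MCP client registers itself and
// gets back a client_id to use in the normal authorization-code + PKCE flow.
const res = await fetch("https://auth.example.com/register", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    client_name: "My MCP Client",
    redirect_uris: ["http://localhost:3334/oauth/callback"],
    grant_types: ["authorization_code", "refresh_token"],
    token_endpoint_auth_method: "none", // public client, PKCE instead of a secret
  }),
});
const { client_id } = await res.json();
console.log("registered as", client_id);
```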
Have you seen significant need for this? I've been trying to find data on things like "how many MCP clients are there really" - if it takes off where everything is going to be an MCP client && dynamically discovering what tools it needs beyond what it was originally set up for, sure.
This is great news; remote MCP support should be open and accessible.
For what it’s worth, I’ve been using WitsyAi [1]: it’s fully free, open source, and serves as a universal desktop chat client (with remote MCP calling). You just need to BYO API keys.
Remote MCPs are close to my heart; I’ve been building a “Heroku for remote MCP tools” over at Ninja[2] to make it easy for people to spin up and share MCP tools without the usual setup headaches.
Lately, I’ve also been helping folks get started with MCP development on Raspberry Pi. If you’re keen to dive in, feel free to reach out [3].

[1] https://witsyai.com
[2] https://ninja.ai
[3] https://calendly.com/schappi/30min
I like the fact that this mcp-debug tool can present a REPL and act as an MCP server itself.
We've been developing our MCP servers by first testing the principle with the "meat robot" approach: we tell the LLM (sometimes just through the stock web interface, no coding agent) what we're able to provide and manually give it what it asks for; when we find a "tool" that works well, we automate it.
This feels like an easier way of trying that process. We're finding it's very important to build an MCP interface that works with what LLMs "want" to do. Without that impedance matching it can be difficult to get the overall outcome you want (I suspect this is worse if there's not much training data out there that resembles your problem).
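As a sketch of the "automate it" step, here's roughly what promoting one of those manually-tested capabilities into a tool looks like with the MCP TypeScript SDK (API names per @modelcontextprotocol/sdk at the time of writing, and `fetchOrderStatus` is a made-up stand-in for whatever the manual step did):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// The capability we'd been providing to the model by hand, now a real tool.
// fetchOrderStatus is a placeholder for whatever your manual step actually did.
async function fetchOrderStatus(orderId: string): Promise<string> {
  const res = await fetch(`https://internal.example.com/orders/${orderId}`);
  return res.text();
}

const server = new McpServer({ name: "orders", version: "0.1.0" });

server.tool(
  "lookup_order",
  "Fetch the current status of an order by its ID",
  { orderId: z.string() },
  async ({ orderId }) => ({
    content: [{ type: "text", text: await fetchOrderStatus(orderId) }],
  })
);

await server.connect(new StdioServerTransport());
```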
Does anybody know of a cross-platform LLM-frontend with sync that is also open-source? I am currently using the web version of LobeChat on macOS and Android, but it's quite slow and has some features missing.
I really wish that MCP servers were configurable in the iOS app, and that there were more configuration options for connecting MCP servers in claude.ai, such as adding custom HTTP headers.
The easiest one to get going with is the Playwright MCP. As a Python dev you might have used Playwright for test automation? Anyway, it gives your tool (e.g. Cursor or Claude Code) access to the browser and automation using Playwright, meaning it can literally load a page to confirm that its own change had the desired effect.
The blender one is also fun as a starting point, if you do any 3d modelling (or even if you don't).
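If anyone wants to try the Playwright one, the wiring is just a config entry. Shown here as an object literal for readability; in practice it lives in your client's MCP config JSON (Claude Desktop's config, Cursor's mcp.json, or .mcp.json for Claude Code), so check the exact file for your tool:

```typescript
// The usual mcpServers entry for the Playwright MCP server: the client
// launches it with npx, so there's no separate install step.
const mcpConfig = {
  mcpServers: {
    playwright: {
      command: "npx",
      args: ["@playwright/mcp@latest"],
    },
  },
};
```

In Claude Code I believe the CLI equivalent is `claude mcp add playwright -- npx @playwright/mcp@latest`.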
Existing MCP support was for the stdio and SSE transport protocols, which means either local MCP servers running in Docker/Node/Python, or a remote server that still requires a sort of workaround running locally in the form of the mcp-remote npm package.
This now natively supports the latest streamable HTTP transport protocol, and the server is entirely remote: nothing is running on your local machine, it's just a URL (usually ending with /mcp; not mandatory, but it's usually true and distinguishes it from /sse servers).
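A minimal sketch of what "just a URL" looks like from the client side, using the TypeScript SDK; class names and import paths are per @modelcontextprotocol/sdk at the time of writing, so verify against the current SDK:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Nothing runs locally: the transport just points at the remote /mcp URL and
// speaks streamable HTTP, with OAuth handled by the client where required.
const transport = new StreamableHTTPClientTransport(
  new URL("https://mcp.example.com/mcp")
);

const client = new Client({ name: "example-client", version: "0.1.0" });
await client.connect(transport);

console.log(await client.listTools()); // tools served entirely from the remote end
```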
It seems like all MCP functionality focuses on desktop-level apps like Claude Code. Are there any people building things like Shiny web apps that talk to MCP servers in the background to answer user questions? The only one I can find is a WIP in acquaint: https://github.com/posit-dev/acquaint/pull/34
I think this is coming very shortly. It was just extremely difficult to ship a remote MCP server with proper auth until like 1 week ago. It's still barely there, but I think it's coming. Every single client-side chatbot can suddenly be 100x more effective in its ability to actually do stuff in the app.
For the more curious and lazy people -- what are elicitations?
Couldn't AI help with that?
Now a lot of people use it to add context to their model. And also tool calls?
I am using continue.dev, not Claude, but I imagine this tech stack will be ported everywhere.
As a Python dev, though, I don't quite yet understand how and what service I should be running, or be using, tbh. Can anyone ELI5?
They're also fun and easy to build.
Here's one I made - it wraps the vscode debugger: https://github.com/jasonjmcghee/claude-debugs-for-you
I've specifically tested it with continue.dev so it might serve as a useful example / template.
It allows publishing any text or Claude artifact directly from Claude.
I made it mostly for fun and as an experiment to see what's possible.
The CloudFlare and Linear MCP servers (at a minimum) seem to use the same approach, the mcp-remote npm package, e.g.
https://github.com/cloudflare/mcp-server-cloudflare/tree/mai...
But mcp-remote is clearly documented as experimental:

> Note: this is a working proof-of-concept but should be considered experimental
https://www.npmjs.com/package/mcp-remote
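For context, the workaround looks roughly like this in a client config: the client still launches a local Node process, and mcp-remote proxies stdio to the remote URL and handles the OAuth dance (object-literal form for readability, illustrative URL):

```typescript
// The pre-native-support shape: a "remote" server still needs a local proxy.
const mcpConfig = {
  mcpServers: {
    someRemoteServer: {
      command: "npx",
      args: ["mcp-remote", "https://mcp.example.com/sse"],
    },
  },
};
```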
I'm not sure how this could be considered anything other than professional negligence. I'm reminded of Kyle Kingsbury's CraftConf talk, Hope Springs Eternal: https://theburningmonk.com/2015/06/craftconf15-takeaways-fro...
Anthropic, having created MCP, shouldn't be outdated though, I agree.
That mcp-remote workaround is no longer necessary.
So I don't really understand what's new in this announcement.
Maybe what's actually new is streamable HTTP and OAuth?
https://github.com/anthropics/claude-code/blob/main/CHANGELO...