Sounds like it defeats the point.
1. How are you defending against the case of one MCP poisoning your firewall LLM into incorrectly classifying other MCP tools?
2. How would you make sure the LLM shows the warning, as they are non-deterministic?
3. How clear do you expect MCP specs to be for your classification step to be trustworthy? To the best of my knowledge there is no spec that outlines how to "label" a tool along the 3 axes, so you've got another non-deterministic step here. Is "writing to disk" an external comm? It is if that directory is exposed to the web. How would you know?
1. We are assuming that the user has done their due diligence verifying the authenticity of the MCP server, in the same way they need to verify them when adding an MCP server to Claude code or VSCode. The gateway protects against an attacker exploiting already installed standard MCP servers, not against malicious servers.
2. That's a very good question - while it is indeed non-deterministic, we have not seen a single case of it not showing the message. Sometimes the message gets mangled but it seems like most current LLMs take the MCP output quite seriously since that is their source of truth about the real world. Also, while the message could in theory not be shown, the offending tool call will still be blocked so the worst case is that the user is simply confused.
3. Currently we follow the trifecta very literally: every tool is classified into a subset of {reads private data, writes on behalf of user, reads publicly modifiable data}. We have an LLM classify each tool at MCP server load time and we cache the results based on whatever data the MCP server sends us. If there are any issues with the classification, you can go into the gateway dashboard and modify it however you like. We are planning improvements to the classification down the line, but we think it is currently solid enough, and we would like to get it into users' hands for UX feedback before we add extra functionality.
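To make the caching concrete, here is a minimal sketch (illustrative names only; `llm_classify` stands in for the structured LLM call, and the real gateway's cache key may differ): the tool is classified once per unique set of server-supplied metadata, and the result is reused until that metadata changes.

```python
import hashlib
import json

# The three trifecta axes a tool can be labeled with (illustrative names).
ACTIONS = {"reads_private_data", "writes_on_behalf_of_user", "reads_public_data"}

_cache: dict[str, set[str]] = {}

def classify_tool(tool_metadata: dict, llm_classify) -> set[str]:
    """Return the subset of trifecta actions this tool performs.

    Results are cached by a hash of whatever metadata the MCP server
    sent (name, description, schema, ...), so the LLM is only called
    once per unique tool definition.
    """
    key = hashlib.sha256(
        json.dumps(tool_metadata, sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        # Intersect with ACTIONS to discard any labels outside the schema.
        _cache[key] = set(llm_classify(tool_metadata)) & ACTIONS
    return _cache[key]
```

If the server ever changes a tool's description or schema, the hash changes and the tool is re-classified on the next load.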
1. The "lethal trifecta" is also the "productive trifecta" - people want to be able to use LLMs to operate in this space since that's where much of the value is; using private / proprietary data to interact with (do I/O with) the real world.
2. I worry that there will soon be (if not already) a fourth leg to the stool - latent malicious training within the LLMs themselves. I know the AI labs are working on this, but trying to ferret out Manchurian Candidates embedded within LLMs may very well be the greatest security challenge of the next few decades.
Regarding the second point, that is a very interesting topic that we haven't thought about. It would seem that our approach would work for this use case too, though. Currently we're defending against the LLM being gullible, but gullible and actively malicious aren't all that different from the gateway's perspective. It's definitely on our radar now, thanks for bringing it up!
But it just seems to me that some of the 'vulnerabilities' are baked in from the beginning, e.g. control and data being in the same channel, which AFAIK isn't solvable. How is it possible to address that at all? Sure, we can do input validation, sanitization, access restriction, and a host of other things, but at the end of the day isn't there still a non-zero chance that something is exploited, and aren't we just playing whack-a-mole? Not to mention I doubt everyone will define things like "private data" and "untrusted" the same way. uBlock tells me when a link is on one of its lists, but I still go ahead and click anyway.
The way it works is that the user registers / imports MCP (Model Context Protocol) servers they would like to use. All the tools of those servers are imported, and then the firewall uses structured LLM calls to decide which types of action each tool performs:
- read private data (e.g. read a local file or read your emails)
- perform an activity on your behalf (e.g. send an email or update a calendar invite)
- read public data (e.g. search the web)
The idea is that if all 3 types of tool calls are performed in a single context session, the LLM is vulnerable to jailbreak attacks (e.g. reads personal data -> reads poisoned public data with malicious instructions -> LLM gets tricked and posts personal data).
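The rule above can be sketched in a few lines (names are illustrative, not our actual code): a tool call is risky exactly when completing it would mean all three action types have occurred in the same session.

```python
# The three action types from the trifecta (illustrative names).
PRIVATE, WRITE, PUBLIC = "reads_private", "writes_for_user", "reads_public"

def completes_trifecta(session_actions: set[str], tool_actions: set[str]) -> bool:
    """True if performing this tool call would raise all three action
    types in the session, i.e. the session becomes jailbreak-prone."""
    return {PRIVATE, WRITE, PUBLIC} <= (session_actions | tool_actions)
```

Note that any two of the three are fine on their own; it's only the combination of all three in one context that gets flagged.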
Once all the tools are classified, the user can review and adjust the labels, and then set up the gateway as an MCP server in their LLM client of choice. For each LLM session the gateway keeps track of all tool calls and, in particular, which action types have been raised in the session. If a tool call would raise all action types for a session, it gets blocked and the user gets a notification directing them to the firewall UI, where they can inspect the offending tool calls and decide to either block the most recent one or add the triggering "set" to an allowlist.
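As a rough sketch of the per-session tracking just described (hypothetical names, not the gateway's actual implementation): the session accumulates the action types raised so far, and a call that would complete the trifecta is blocked unless the user has allowlisted that combination.

```python
class Session:
    """Tracks which trifecta action types have been raised in one
    LLM session and blocks calls that would raise all three."""

    TRIFECTA = frozenset({"reads_private", "writes_for_user", "reads_public"})

    def __init__(self, allowlist=None):
        self.raised: set[str] = set()
        # Allowlist of frozensets the user has explicitly approved.
        self.allowlist: set[frozenset] = allowlist or set()

    def attempt(self, tool_actions: set[str]) -> bool:
        """Record the tool call and return True if it is allowed."""
        combined = frozenset(self.raised | tool_actions)
        if self.TRIFECTA <= combined and combined not in self.allowlist:
            # Blocked: the user gets a notification and can inspect the
            # offending calls in the firewall UI. The session state is
            # left unchanged so other tools keep working.
            return False
        self.raised |= tool_actions
        return True
```

The key design point is that blocking happens on the *call that completes* the trifecta, so any two of the three capabilities can still be used freely in a session.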
Next step is transitioning the product from a web UI to a desktop app with a much cleaner, more streamlined interface. We're still working on improving the UX, but the backend is solid and we would really like to get some more feedback on it.