We built a broker for the keys/secrets. We have a fork of nushell called seksh, which accepts stand-ins for the actual credentials and only reifies them inside the shell's AST. This makes the keys inaccessible to the agent. In the end, the agent won't even have its own Anthropic/OpenAI keys!
The broker also acts as a proxy, and injects secrets or even does asymmetric key signing on behalf of the proxied agent.
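To make the idea concrete, here is a minimal sketch of what that egress-time substitution could look like. This is purely illustrative: the `{{SECRET:NAME}}` placeholder syntax, the `VAULT` store, and `inject_secrets` are my own invented names, not the actual seksh/broker implementation.

```python
import re

# Hypothetical: the secret store lives in the broker process only;
# the agent never sees anything but placeholders.
VAULT = {"ANTHROPIC_API_KEY": "sk-ant-real-key"}

PLACEHOLDER = re.compile(r"\{\{SECRET:([A-Z0-9_]+)\}\}")

def inject_secrets(headers: dict[str, str]) -> dict[str, str]:
    """Replace placeholders with real values just before the payload leaves the host."""
    def sub(value: str) -> str:
        return PLACEHOLDER.sub(lambda m: VAULT[m.group(1)], value)
    return {k: sub(v) for k, v in headers.items()}

# The agent composes requests with stand-ins only; the broker reifies
# the key at the network boundary, so the agent's context never holds it.
agent_headers = {"x-api-key": "{{SECRET:ANTHROPIC_API_KEY}}"}
real_headers = inject_secrets(agent_headers)
```

The signing case would work the same way: the agent hands the broker an unsigned payload, and the private key material only ever exists on the broker's side.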
My agents are already running on our fork of OpenClaw, doing the work. Their Doppler env vars are deprecated, and all their work now goes through the broker!
All that said, we might just take a few ideas from IronClaw as well.
I'm getting tired of these vibe-designed security things. I skimmed the "design". What is sandboxed from what? What is the threat model? What does it protect against, if anything? What does it fail to protect against? How does data get into a sandbox? How does it get out?
It kind of sounds like the LLM built a large system that doesn't necessarily achieve any actual value.
I mean, it is described somewhat succinctly, no? Potentially untrusted tools are isolated from the rest of the system. There were recently some cases of skills for OpenClaw being used as vectors for malware, and this minimizes the adverse effect of potentially malicious skills. It also protects against your agent leaking your secrets left and right, because it has no access to them. Secrets are only supplied when payloads are leaving the host, i.e. the AI never sees your keys.
Looking at the feature parity page, I realized how big the OpenClaw ecosystem has become. It's completely crazy for such a young project to be able to interface with so many subsystems so fast.
At this rate, it's going to be simply impossible to catch up in just a few months.
Interesting approach. It requires a Near AI account. Supposedly that's a more private way to do inference, but at the same time they do offer Claude Opus 4.6 (among others), so I wonder what privacy guarantees they can actually offer and whether those guarantees depend on Anthropic.
They do verifiable inference on TEEs for the open-source models. The Anthropic ones, I think, they basically proxy for you (also via a trusted TEE) so that requests can't be tied to you. A VPN for LLM inference, so to speak.
I think the people developing this (Illia Polosukhin of "Attention Is All You Need", among others) know enough to leverage their skills with AI rather than produce slop.
SEKS — Secure Environment for Key Services
I put up a Show HN, but no one noticed: https://news.ycombinator.com/item?id=47005607
Website is here: https://seksbot.com/