dawg91 commented on IronClaw: a Rust-based clawd that runs tools in isolated WASM sandboxes   github.com/nearai/ironcla... · Posted by u/dawg91
stcredzero · a month ago
We have a different security model.

SEKS — Secure Environment for Key Services

We built a broker for the keys/secrets. We have a fork of nushell called seksh, which accepts stand-ins for the actual credentials but only reifies them inside the shell's AST. This makes the keys inaccessible to the agent. In the end, the agent won't even have its own Anthropic/OpenAI keys!

The broker also acts as a proxy, and injects secrets or even does asymmetric key signing on behalf of the proxied agent.
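The broker/stand-in idea can be sketched in a few lines. This is my own illustrative Python, not SEKS's actual API: the placeholder name, store, and `inject_secrets` helper are all assumptions, but they show the core property that the agent only ever handles opaque tokens while the real secret is substituted just before a payload leaves the host.

```python
# Broker-side secret store; the agent process never sees this mapping.
# Names and placeholder syntax are hypothetical.
SECRET_STORE = {"{{ANTHROPIC_KEY}}": "sk-ant-real-key"}

def inject_secrets(headers: dict) -> dict:
    """Rewrite placeholder tokens into real secrets at the host boundary."""
    return {
        name: SECRET_STORE.get(value, value)
        for name, value in headers.items()
    }

# What the agent constructs (it only ever holds the placeholder):
agent_headers = {"x-api-key": "{{ANTHROPIC_KEY}}",
                 "content-type": "application/json"}

# What actually leaves the host after the broker's rewrite:
outbound = inject_secrets(agent_headers)
```

A real broker would sit as an HTTP proxy and also do things like asymmetric signing on the agent's behalf, but the substitution-at-the-boundary step is the essential trick.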

My agents are already running on our fork of OpenClaw, doing the work. They deprecated their Doppler ENV vars, and all their work goes through the broker!

All that said, we might just take a few ideas from IronClaw as well.

I put up a Show HN, but no one noticed: https://news.ycombinator.com/item?id=47005607

Website is here: https://seksbot.com/

dawg91 · a month ago
Your Eastern European users will get some interesting results when googling for this
ramoz · a month ago
Can you link to the verifiable inference method?
amluto · a month ago
I'm getting tired of these vibe-designed security things. I skimmed the "design". What is sandboxed from what? What is the threat model? What does it protect against, if anything? What does it fail to protect against? How does data get into a sandbox? How does it get out?

It kind of sounds like the LLM built a large system that doesn't necessarily achieve any actual value.

dawg91 · a month ago
I mean, it is described somewhat succinctly, no? Potentially untrusted tools are isolated from the rest of the system - there were recently some cases of skills for openclaw being used as vectors for malware. This minimizes the adverse effect of potentially malicious skills. It also protects your agent from leaking your secrets left and right, because it has no access to them. Secrets are only supplied when payloads are leaving the host - i.e. the AI never sees your keys.
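The "agent has no access to your secrets" property can be illustrated without any WASM machinery at all. A minimal sketch, assuming nothing about IronClaw's internals (the `run_tool_isolated` helper and the scrubbing rule are mine): run the tool in a child process whose environment has been stripped of secrets, so keys present on the host never reach it. WASM sandboxing is a much stronger isolation boundary than this, but the secret-scrubbing aspect is the same.

```python
import os
import subprocess
import sys

def run_tool_isolated(code: str) -> str:
    """Run untrusted tool code with an environment stripped of secrets."""
    # Hypothetical scrubbing rule: drop anything that looks like a secret.
    clean_env = {k: v for k, v in os.environ.items()
                 if not k.endswith("_KEY") and "SECRET" not in k}
    result = subprocess.run(
        [sys.executable, "-c", code],
        env=clean_env, capture_output=True, text=True, timeout=10,
    )
    return result.stdout.strip()

os.environ["ANTHROPIC_KEY"] = "sk-ant-secret"  # present on the host...
# ...but invisible to the isolated tool:
leaked = run_tool_isolated(
    "import os; print(os.environ.get('ANTHROPIC_KEY'))"
)
# leaked is "None" - the child never saw the key.
```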
llmslave · a month ago
the power of openclaw is there's no sandboxing
dawg91 · a month ago
Or you design the sandbox so smartly that it's seamless...
jgarzik · a month ago
Does it isolate keys away from bots?
dawg91 · a month ago
Yes exactly, keys are only injected at host boundary
bsaul · a month ago
looking at the feature parity page, i realized how big the openclaw ecosystem has become. It's completely crazy for such a young project to be able to interface with so many subsystems so fast.

At this rate, it's going to be simply impossible to catch up in just a few months.

dawg91 · a month ago
Idk, this seems to be gaining momentum, and with devs being able to leverage their skillset via vibe coding, anything seems possible really.
skybrian · a month ago
Interesting approach. It requires a Near AI account. Supposedly that's a more private way to do inference, but at the same time they do offer Claude Opus 4.6 (among others), so I wonder what privacy guarantees they can actually offer and whether it depends on Anthropic?
dawg91 · a month ago
They do verifiable inference in TEEs for the open-source models. The Anthropic ones I think they basically proxy for you (also via a trusted TEE) so that requests can't be tied to you. A VPN for LLM inference, so to speak.
whalesalad · a month ago
dawg91 · a month ago
I think the guys who are developing this (Illia Polosukhin of "Attention Is All You Need") and others know enough to leverage their skills with AI vs. producing slop
dawg91 · a month ago
Fun fact: it's being developed by one of the authors of "Attention is all you need"
friendofmine · a month ago
Huh what's the benefit
dawg91 · a month ago
It's a hardened, security-first implementation. The WASM runtime is specifically for isolating tools in sandboxes
