Readit News
nknutalapati commented on Show HN: Merkle Mountain Range audit log and execution tickets for AI agents   github.com/narendrakumarn... · Posted by u/nknutalapati
nknutalapati · 9 days ago
Hi HN — I've been experimenting with ways to make AI agent actions auditable and enforceable at runtime.

This project has two parts.

1. LICITRA-MMR — An append-only audit log using a Merkle Mountain Range instead of a simple hash chain. With a hash chain, verifying one event requires replaying the entire log. With an MMR, verification uses a logarithmic proof (~14 SHA-256 operations for ~10k events).
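To make the "~14 SHA-256 operations" concrete, here is a minimal sketch (not the LICITRA-MMR code, and using a plain Merkle tree rather than a Mountain Range) showing that an inclusion proof over ~10k log events needs only ~log2(n) sibling hashes, instead of replaying the whole log:

```python
# Minimal sketch: a plain Merkle tree over 10_000 leaves. An inclusion
# proof carries one sibling hash per level, so ~log2(10_000) ≈ 14 hashes.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Build the tree bottom-up; an unpaired last node is promoted unchanged."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        nxt = [sha256(cur[i] + cur[i + 1]) for i in range(0, len(cur) - 1, 2)]
        if len(cur) % 2:
            nxt.append(cur[-1])
        levels.append(nxt)
    return levels

def prove(levels, index):
    """Collect the sibling hash at each level from leaf to root."""
    proof = []
    for level in levels[:-1]:
        sib = index ^ 1
        if sib < len(level):
            proof.append((sib < index, level[sib]))  # (sibling-is-left, hash)
        index //= 2
    return proof

def verify(leaf, proof, root):
    h = leaf
    for is_left, sib in proof:
        h = sha256(sib + h) if is_left else sha256(h + sib)
    return h == root

leaves = [sha256(f"event-{i}".encode()) for i in range(10_000)]
levels = build_levels(leaves)
root = levels[-1][0]
proof = prove(levels, 4_321)
assert verify(leaves[4_321], proof, root)
print(len(proof))  # 14 sibling hashes for 10k events
```

An MMR adds to this the ability to append without rebuilding (a forest of perfect trees, "peaks", bagged into one root), but the proof-size math is the same logarithmic shape.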

2. LICITRA-SENTRY — A small control layer between agents and tools.

Flow: agent → authorization service → signed execution ticket → proxy → tool

After approval, the system issues a signed ticket containing agent identity, tool name, hash of the exact request payload, and expiration. The proxy verifies the signature and recomputes the request hash before allowing execution.

This blocks: payload mutation after approval, replay of approvals across agents, and direct tool access without authorization.
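The proxy-side check can be sketched roughly like this. This is an illustration, not the LICITRA-SENTRY code: the function names are made up, and I've used a shared-key HMAC where the real project may well use asymmetric signatures:

```python
# Hypothetical sketch of the ticket flow: authz service issues a signed
# ticket binding (agent, tool, payload hash, expiry); the proxy verifies
# the signature and recomputes the payload hash before executing.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"shared-secret-between-authz-and-proxy"  # assumption for the sketch

def issue_ticket(agent_id: str, tool: str, payload: bytes, ttl_s: int = 60) -> dict:
    body = {
        "agent_id": agent_id,
        "tool": tool,
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
        "expires_at": time.time() + ttl_s,
    }
    msg = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()
    return body

def proxy_check(ticket: dict, agent_id: str, tool: str, payload: bytes) -> bool:
    body = {k: v for k, v in ticket.items() if k != "sig"}
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(ticket["sig"], expected)     # signature valid
        and ticket["agent_id"] == agent_id               # no cross-agent replay
        and ticket["tool"] == tool                       # right tool
        and ticket["payload_sha256"]
            == hashlib.sha256(payload).hexdigest()       # no post-approval mutation
        and time.time() < ticket["expires_at"]           # not expired
    )

t = issue_ticket("agent-7", "send_email", b'{"to":"a@b.com"}')
assert proxy_check(t, "agent-7", "send_email", b'{"to":"a@b.com"}')
assert not proxy_check(t, "agent-7", "send_email", b'{"to":"evil@b.com"}')  # mutated payload
assert not proxy_check(t, "agent-9", "send_email", b'{"to":"a@b.com"}')     # replayed by another agent
```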

Limitations I want to be upfront about: single-operator trust model, simple pattern-based content inspection, no distributed verification, not integrated with frameworks yet.

SENTRY repo: https://github.com/narendrakumarnutalapati/licitra-sentry

Happy to answer questions about design tradeoffs or where this breaks.

nknutalapati commented on Show HN: OnGarde – Runtime content security proxy for self-hosted AI agents    · Posted by u/antimaterial
nknutalapati · 11 days ago
Localhost-only dashboard with SQLite audit is a good default for self-hosted. Same question I keep asking these proxy projects: if the SQLite log is the evidence layer, what happens when someone disputes whether a block actually occurred? Is there a way to verify the log independently, or does it depend on trusting the file wasn't touched?
nknutalapati commented on Show HN: ClawCare – Security scanner and runtime guard for AI agent skills   github.com/natechensan/Cl... · Posted by u/chendev2
nknutalapati · 11 days ago
Runtime guarding at the tool execution layer is the right enforcement point. One thing I'd push further: the audit trail — is it append-only with integrity guarantees, or a standard log? If the guard blocks a command, can you prove that decision happened and wasn't altered later?
nknutalapati commented on Show HN: ClawShield – Open-source security proxy for AI agents (Go, eBPF)   github.com/SleuthCo/claws... · Posted by u/sleuthco
nknutalapati · 11 days ago
Solid proxy architecture. The deny-by-default YAML policy engine is the right call.

One question on the audit side: decisions are logged to SQLite — is that log tamper-evident? If an operator or admin modifies a row after the fact, is there a mechanism to detect it, or does verification depend on the SQLite file being unaltered?

Asking because in regulated environments, the first thing auditors challenge is whether the log itself can be trusted independently.
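For readers unfamiliar with the distinction, "tamper-evident" here means something like the following sketch (my illustration, not ClawShield's code): each log entry commits to the hash of the previous one, so editing any row breaks every later link. The caveat is that an attacker who holds write access can recompute the whole chain, which is why the head hash needs to be anchored somewhere external for independent verification:

```python
# Hash-chained log sketch: row N's hash covers row N's content plus
# row N-1's hash. Editing any row invalidates all subsequent hashes.
import hashlib
import json

GENESIS = b"\x00" * 32

def append_all(rows):
    prev, entries = GENESIS, []
    for row in rows:
        h = hashlib.sha256(prev + json.dumps(row, sort_keys=True).encode()).digest()
        entries.append({"row": row, "hash": h.hex()})
        prev = h
    return entries

def verify_chain(entries) -> bool:
    prev = GENESIS
    for e in entries:
        h = hashlib.sha256(prev + json.dumps(e["row"], sort_keys=True).encode()).digest()
        if h.hex() != e["hash"]:
            return False
        prev = h
    return True

log = append_all([
    {"decision": "block", "cmd": "rm -rf /"},
    {"decision": "allow", "cmd": "ls"},
])
assert verify_chain(log)
log[0]["row"]["decision"] = "allow"  # admin edits a row after the fact
assert not verify_chain(log)         # the tampering is now detectable
```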

u/nknutalapati

Karma: 1 · Cake day: March 3, 2026
About
Building cryptographic runtime integrity infrastructure for agentic AI.

Open source: github.com/narendrakumarnutalapati
