Readit News
yuvrajangads commented on WebMCP is available for early preview   developer.chrome.com/blog... · Posted by u/andsoitis
MadnessASAP · 10 days ago
The threat model doesn't really change for agents that already have "web fetch" (or equivalent) enabled. The agent is free to communicate with untrusted websites[1]. As before, the firewall remains at what private information the agent is allowed to have.

[1] If anything, the threat gets somewhat reduced by the ability to point directly at a trusted domain and say "use this site and its (presumably) trusted tools."

yuvrajangads · 10 days ago
Fair point about web fetch already being a trust boundary. The difference I see is that web fetch returns data, but WebMCP tools can define actions. A tool called "add_to_cart" is a lot more dangerous than fetching a product page. The agent trusts the tool's name and description to decide whether to call it, and that metadata comes from the site.

But yeah, if you're already letting agents browse freely, the incremental risk might be smaller than I'm imagining.
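To make the "data vs. actions" distinction concrete, here's a minimal sketch of what I mean by gating on tool metadata. The names and the `mutating` flag are illustrative, not the actual WebMCP API: the point is just that site-declared metadata is untrusted input, so anything that performs an action should be deny-by-default.

```python
# Hypothetical sketch (not the real WebMCP API): site-declared tool
# metadata is untrusted, so mutating actions need an explicit opt-in.
SITE_TOOLS = [
    {"name": "get_product", "description": "Fetch product details", "mutating": False},
    {"name": "add_to_cart", "description": "Add an item to the cart", "mutating": True},
]

def allowed_calls(tools, allow_mutations=False):
    """Return tool names the agent may call; mutating tools are blocked by default."""
    return [t["name"] for t in tools if allow_mutations or not t["mutating"]]
```

Under that policy, fetching a product page is always fine, but `add_to_cart` only becomes callable after the user opts in.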

yuvrajangads commented on Show HN: Commitdog – Git on steroids CLI (pure Go, ~3MB binary)   aysdog.com/commitdog... · Posted by u/anirbanfaith
yuvrajangads · 10 days ago
Nice, similar energy to lazygit but as a single binary. The AI commit message generation is useful, but I'd want a way to set a template or convention (Conventional Commits, etc.) so it doesn't just freestyle every time.
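By "convention" I mean something like the Conventional Commits subject-line shape. A rough sketch of the kind of check I'd want run against a generated message before accepting it (this is illustrative, not anything commitdog actually does):

```python
import re

# Conventional Commits subject line: type(optional scope)!?: description
CONVENTIONAL = re.compile(
    r"^(feat|fix|docs|refactor|perf|test|chore|build|ci)(\([\w-]+\))?!?: .+"
)

def follows_convention(message):
    """Check only the first line of the commit message against the pattern."""
    return bool(CONVENTIONAL.match(message.splitlines()[0]))
```

A generator that retries until this passes would already beat freestyling.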

One thing: piping the install through curl | sh makes some people nervous. Might be worth adding a homebrew tap or at least a checksum for the binary.
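Publishing a SHA-256 alongside the binary is cheap and lets cautious users verify the download before running it. The verification itself is a few lines (shown in Python here; `sha256sum -c` does the same thing on the command line):

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Compare a file's SHA-256 digest against a published checksum."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()
```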

yuvrajangads commented on WebMCP is available for early preview   developer.chrome.com/blog... · Posted by u/andsoitis
yuvrajangads · 10 days ago
I've been using MCP with Claude Code for a while now (Google Maps, Swiggy, Figma servers) and the local tool-use model works well because I control both sides. I pick which servers to trust, I see every tool call, and I can deny anything sketchy.

WebMCP flips that. The website exposes the tools and the browser decides what to call. The security model gets a lot harder when you're trusting random sites to define their own tool interfaces honestly. A malicious site could expose tools that look helpful but exfiltrate context from the agent's session.

Curious how they plan to sandbox this. The local MCP model works because trust is explicit. Not sure how that translates to the open web.
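The "explicit trust" part of the local setup is basically an origin allowlist, and it's hard to see what replaces it on the open web. A toy sketch of the policy I get for free with local MCP servers (origins and tool names here are just examples):

```python
# Explicit-trust model: tools are only callable if their origin is on a
# user-approved allowlist. Everything else is refused, not sandboxed.
TRUSTED_ORIGINS = {"https://maps.googleapis.com", "https://www.figma.com"}

def may_call(tool_origin, tool_name):
    if tool_origin not in TRUSTED_ORIGINS:
        raise PermissionError(f"{tool_origin} not trusted; refusing {tool_name}")
    return True
```

WebMCP would need an equivalent gate, except the set of origins is every site the agent happens to visit.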

yuvrajangads commented on If AI writes code, should the session be part of the commit?   github.com/mandel-macaque... · Posted by u/mandel_x
yuvrajangads · 10 days ago
The session itself is mostly noise. Half of it is the model going down wrong paths, backtracking, and trying again. Storing that alongside the commit is like saving your browser history next to your finished code.

What actually helps is a good commit message explaining the intent. If an AI wrote the code, the interesting part isn't the transcript, it's why you asked for it and what constraints you gave it. A one-paragraph description of the goal and approach is worth more than a 200-message session log.

I think the real question isn't about storing sessions, it's about whether we're writing worse commit messages because we assume the AI context is "somewhere."
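For what it's worth, the kind of message I mean looks like this (the change described is made up, purely to show the shape):

```
fix(parser): handle nested quotes in CSV fields

Asked the agent to fix quote escaping, with the constraint that the
existing streaming API couldn't change. It rewrote the state machine
in the tokenizer rather than adding a lookahead buffer.
```

Goal, constraint, approach. That's recoverable in ten seconds; a session transcript isn't.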

u/yuvrajangads

Karma: 11 · Cake day: February 5, 2026