One thing: piping the install through curl | sh makes some people nervous. Might be worth adding a homebrew tap or at least a checksum for the binary.
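To make the checksum suggestion concrete, here's a minimal sketch of what publishing one enables. The URLs and file names are placeholders, not the project's actual release artifacts; the download is simulated locally so the verification step can be seen end to end:

```shell
# Hypothetical release names; in practice you'd curl both files from
# the project's releases page, e.g.:
#   curl -fsSLO https://example.com/releases/tool-linux-amd64
#   curl -fsSLO https://example.com/releases/tool-linux-amd64.sha256

# Simulate the downloaded binary and its published checksum locally:
printf 'fake binary contents\n' > tool-linux-amd64
sha256sum tool-linux-amd64 > tool-linux-amd64.sha256

# The actual verification: sha256sum -c exits non-zero on any mismatch,
# so a tampered binary fails loudly instead of getting piped into sh.
sha256sum -c tool-linux-amd64.sha256 && echo "checksum OK"
```

It's not a substitute for signed releases, but it at least lets cautious users verify the binary out of band instead of trusting whatever the pipe delivered.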
WebMCP flips that. The website exposes the tools and the browser decides what to call. The security model gets a lot harder when you're trusting random sites to define their own tool interfaces honestly. A malicious site could expose tools that look helpful but exfiltrate context from the agent's session.
Curious how they plan to sandbox this. The local MCP model works because trust is explicit. Not sure how that translates to the open web.
What actually helps is a good commit message explaining the intent. If an AI wrote the code, the interesting part isn't the transcript, it's why you asked for it and what constraints you gave it. A one-paragraph description of the goal and approach is worth more than a 200-message session log.
I think the real question isn't about storing sessions, it's about whether we're writing worse commit messages because we assume the AI context is "somewhere."
[1] If anything, the threat is somewhat reduced by the ability to point directly at a trusted domain and say "use this site and its (presumably) trusted tools."
But yeah, if you're already letting agents browse freely, the incremental risk might be smaller than I'm imagining.