Especially these days, you could vibe-code something an order of magnitude better within a day or two and not be locked into a single author's rat's nest of code.
By the way, this is not specific to MCP; it could have happened with any package.
The first is a preventive maintenance and calibration tracker (https://pmcal.net) that was born out of my day job as an engineer in small-business manufacturing.
The second is an AI engine for pulling structured data out of incoming email (either via IMAP on your email server or via SES). Think of the engine that powers TripIt: they had to write on the order of 10,000 different ingestors, one for each airline, hotel, and travel booking site. With structured-output AI, the need to write site-specific ingestors goes away.
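As a rough sketch of the structured-output approach: instead of per-site parsers, you define one schema, ask the model to fill it from the raw email, and validate the reply. The schema fields, prompt, and the simulated model reply below are all illustrative assumptions, not the product's actual API:

```python
import json
from dataclasses import dataclass

# Hypothetical target schema for a flight-confirmation email;
# field names are illustrative only.
@dataclass
class FlightBooking:
    airline: str
    confirmation_code: str
    depart_airport: str
    arrive_airport: str
    depart_time: str  # ISO 8601

SCHEMA_PROMPT = (
    "Extract the booking from the email below and reply with JSON only, "
    "using keys: airline, confirmation_code, depart_airport, "
    "arrive_airport, depart_time (ISO 8601)."
)

def parse_booking(model_reply: str) -> FlightBooking:
    """Validate the model's JSON reply against the schema."""
    data = json.loads(model_reply)
    # Raises TypeError on missing or unexpected keys.
    return FlightBooking(**data)

# Simulated model output; a real deployment would send SCHEMA_PROMPT plus
# the raw email body to an LLM with structured-output support.
reply = (
    '{"airline": "Example Air", "confirmation_code": "ABC123", '
    '"depart_airport": "SFO", "arrive_airport": "JFK", '
    '"depart_time": "2025-06-01T08:30:00"}'
)
booking = parse_booking(reply)
print(booking.confirmation_code)  # ABC123
```

The same schema-plus-validation loop works for any sender, which is exactly what replaces the 10,000 hand-written ingestors.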
Service discovery is handled via the port-forwarding model. A node can advertise a named endpoint (e.g. an Ollama instance), and another node can bind a local listener to that key. The mesh carries the traffic with end-to-end encryption, so from the client's perspective the service behaves like a local port even though it is remote.
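The advertise/bind pattern can be sketched with plain sockets. Everything here is a toy running on localhost: the in-process registry dict and the function names are assumptions for illustration, and the real mesh replaces the direct TCP hop with encrypted routing between nodes:

```python
import socket
import threading

# Toy in-process "registry" mapping advertised service names to addresses.
# In the real mesh this lookup and the transport span multiple nodes.
REGISTRY = {}

def advertise(name, host, port):
    REGISTRY[name] = (host, port)

def pipe(src, dst):
    """Copy bytes one way until the source closes."""
    while chunk := src.recv(4096):
        dst.sendall(chunk)
    dst.close()

def bind_local(name):
    """Listen on a local port and forward each connection to the named service."""
    ls = socket.socket()
    ls.bind(("127.0.0.1", 0))
    ls.listen()
    def serve():
        while True:
            client, _ = ls.accept()
            remote = socket.create_connection(REGISTRY[name])
            threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
            threading.Thread(target=pipe, args=(remote, client), daemon=True).start()
    threading.Thread(target=serve, daemon=True).start()
    return ls.getsockname()[1]

def echo_server():
    """Stand-in for a remote service (e.g. an Ollama instance)."""
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    s.listen()
    def run():
        while True:
            conn, _ = s.accept()
            threading.Thread(target=pipe, args=(conn, conn), daemon=True).start()
    threading.Thread(target=run, daemon=True).start()
    return s.getsockname()[1]

# One node advertises a service; another binds a local listener to the key.
svc_port = echo_server()
advertise("ollama", "127.0.0.1", svc_port)
local_port = bind_local("ollama")

c = socket.create_connection(("127.0.0.1", local_port))
c.sendall(b"ping")
print(c.recv(4096))  # the remote service answers through the local port
```

From the client's point of view it is just a local TCP port; the forwarding machinery behind it is invisible, which is the property the mesh generalizes across NATed nodes.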
For distributed inference, the main constraints are latency and hop count: extra hops add delay, which is fine for background work but matters for interactive use. Everything runs in userspace, and outbound-only connections plus QUIC make it usable behind typical residential NATs.