One suggestion for improvement: Add some more info to your website/GitHub about the need for a provider and which providers are compatible. It took me a bit to figure that out because there was no prominent info about it. Additionally, none of the demos showed a login or authentication part. To me, it seemed like the VMs just came out of nowhere. So at first, I thought "Cloudrouter" was a project/company that gave away free VMs/GPUs (e.g. free tier/trial thing). But that seemed too good to be true. Later, I noticed the e2b.app domain and then I also found the little note way down at the bottom of the site that says "Provider selection" and "Use E2B provider (default)". Then I got it. However, I should mention that I don't know much about this whole topic. I hadn't heard of E2B or Modal before. Other people might find it more clear.
For those that are wondering about this too, you will need to use a provider like https://e2b.dev/ or https://modal.com/ to use this skill, and you pay them based on usage time.
I much prefer independent, loosely coupled, highly cohesive, composable, extensible tools. It's not a very "programmery" solution, but it makes it easier as a user to fix things, extend things, combine things, etc.
The Docker template you have bundles a ton of apps into one container. This is problematic as it creates a big support burden, build burden, and compatibility burden. Docker works better when you build one container per app, run them separately, and connect them with TCP, sockets, or volumes. Then the user can swap them out, add new ones, remove unneeded ones, etc., and they can use an official upstream project. Docker-in-docker with a custom Docker network works pretty well, and the host is still accessible if needed.
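As a sketch of what that one-container-per-app approach might look like for a dev environment (service names and images here are illustrative, not Cloudrouter's actual template):

```yaml
# docker-compose.yml -- hypothetical split of a bundled template into
# single-app services connected by a network and a shared volume.
services:
  dev-server:
    image: node:20                # official upstream image
    working_dir: /workspace
    volumes:
      - project:/workspace        # share project files via a volume
    command: npm run dev
    networks: [devnet]
  browser:
    image: chromedp/headless-shell:latest   # standalone headless browser
    networks: [devnet]            # reaches dev-server over TCP by service name
networks:
  devnet:
volumes:
  project:
```

Each service can then be upgraded, replaced, or removed independently, and each one tracks its own upstream image.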
As a nit-pick: your auth code has browser-handling logic. This is low cohesion, a sign of problems to come. And in your rsync code:
sshCmd := fmt.Sprintf("ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=%q", proxyCmd)
I was just commenting the other day on here about how nobody checks SSH host keys and how SSH is basically wide open because of it. Just leaving this here to show people what I mean. (It's not an easy problem to solve, but ignoring security isn't great either.)

Re: Docker template. I understand the Docker critique. The primary use case is an agent uploading its working directory and spinning it up as a dev environment. The agent needs the project files, the dev server, and the browser all in one place. If these are separate containers, the agent has to reason about volume mounts, Docker networking, and so on, which means more room for confusion and a higher likelihood that agents get something wrong. A single environment where cloudrouter start ./my-project just works is what I envisioned.
Re: SSH host keys. SSH never connects to a real host; it's tunneled over a TLS WebSocket via ProxyCommand. The hostname is synthetic, there's a per-session auth token on the WebSocket layer, and VMs are ephemeral with fresh host keys on every boot. So SSH isn't wide open here: we never expose the SSH port (port 10000) directly; everything goes through our authenticated proxy.
I do it with Pulumi, because you can write your infrastructure in Python or TypeScript. But there are many infrastructure-as-code tools to choose from.
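To illustrate what "infrastructure in TypeScript" means here, a minimal Pulumi program looks roughly like this (the AWS provider and the bucket resource are just illustrative choices; this needs a configured Pulumi project to run):

```typescript
// index.ts -- a minimal Pulumi program using the TypeScript runtime.
import * as aws from "@pulumi/aws";

// Declare a resource in ordinary code; `pulumi up` diffs and applies it.
const bucket = new aws.s3.Bucket("site-assets", {
    tags: { managedBy: "pulumi" },
});

// Stack outputs are exported like normal module exports.
export const bucketName = bucket.id;
```

Because it's a real program, you get loops, functions, and type checking for free, which is the main draw over a purely declarative DSL.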