Just an idea…
Why limit the lifetime to 30 mins?
It's really useful to just turn a computer on, use a disk, and then plop its URL in the browser.
I currently do one computer per project. I don't even put them in git anymore. I have an MDM server running to manage my kids' phones, a "help me reply to all the people" computer that reads everything I'm supposed to read, a dumb game I play with my son, a family todo list no one uses but me, etc, etc.
Immediate computers have made side projects a lot more fun again. And the nice thing is, they cost nothing when I forget about them.
Yes.
They won't horizontally scale. They're pretty good for hosting my side projects! Not good for, e.g., hosting the API that orchestrates Sprites.
That's so confusing to me I had to read it five times. Are you saying you lose just the metadata, or that the underlying data is actually mangled or gone?
One of the greatest features of something like this, to me, would be durable access to my data even beyond JuiceFS in a bad situation. Even if JuiceFS totally messes up, my data is still in S3 (and with versioning etc., it's still there even if JuiceFS mangles or deletes it). So odd to design this kind of software and lose this property.
Tigris has a one-to-one FUSE that does what you want: https://github.com/tigrisdata/tigrisfs
Tried with xterm, tilix and ghostty, all of which support the title-setter escape sequence locally. For some reason, these get messed up (smells like an edge case with escaping) and the result looks like this:
$ sprite c ": history-search-backward'
\]\u@\h\[\]:\[\]\w\[\]\$ ' ": history-search-forward'sprite@sprite:~$
"sprite@sprite:~$
I'm guessing y'all use Zsh, because that works flawlessly :)

* poor locking support (this sounds like it works better)
* it's slow
* no manual fence support; a bad but common way of distributing workloads is e.g. to compile a test on one machine (on an NFS mount), and then use SLURM or SGE to run the test on other machines. You use NFS to let the other machines access the data... and this works... except that you either have to disable write caches or have horrible hacks to make the output of the first machine visible to the others. What you really want is a manual fence: "make all changes to this directory visible on the server"
* The bloody .nfs000000 files. I think this might be fixed by NFSv4 but it seems like nobody actually uses that. (Not helped by the fact that CentOS 7 is considered "modern" to EDA people.)
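On the manual-fence point above: the closest thing NFS actually offers is close-to-open consistency on individual files, and the workaround for the compile-here-run-there workflow is usually some version of the dance below. This is a hedged sketch, not a real fence: it only covers one file at a time, which is exactly the complaint — there is no "make this whole directory visible on the server" operation.

```python
import os

def publish(path: str, data: bytes) -> None:
    # Writer side: fsync() pushes the data to the NFS server, and close()
    # is the NFS close-to-open barrier -- a reader that open()s the file
    # *after* this returns should see the full contents.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)

def consume(path: str) -> bytes:
    # Reader side: a fresh open() revalidates cached attributes against
    # the server; reading from an fd that was already open would not.
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.read(fd, 1 << 20)  # fine for small outputs
    finally:
        os.close(fd)
```

Per-file fsync+close over a build tree with thousands of outputs is also exactly where the "horrible hacks" (or disabled write caches) come from.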
The meta store is a bottleneck too. For a shared mount, you've got a bunch of clients sharing a metadata store that lives in the cloud somewhere. They do a lot of aggressive metadata caching. It's still surprisingly slow at times.
Everything I want to pay attention to gets a token, the server goes and looks for stuff in the API, and seeds local SQLite DBs. If possible, it listens for webhooks to stay fresh.
Mostly the interface is Claude Code. I have a web view that gives me some idea of volume, and then I just chat at Claude Code to have it see what's going on. It does this by querying and cross-referencing the SQLite DBs.
I will have Claude Code send/post a response for me, but I still write them like a meatsack.
It's effectively: a long-lived HTTP server, SQLite, and then Claude skills for scripts that help it consistently do things based on my awful typing.
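A minimal sketch of that shape, in case it's useful to anyone: a long-lived HTTP server that upserts into a local SQLite DB, fed either by polling an upstream API or by incoming webhooks. Everything here is hypothetical scaffolding (the `items` schema, the webhook payload shape, the idea that each service posts one JSON item per request) — the real version would have one seeder per token/service.

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

def init_db(path: str = "inbox.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS items (
               id      TEXT PRIMARY KEY,
               source  TEXT,
               body    TEXT,
               seen_at TEXT
           )"""
    )
    conn.commit()
    return conn

def seed(conn: sqlite3.Connection, items: list[dict]) -> None:
    # Upsert on id so a re-poll or a replayed webhook doesn't duplicate rows.
    conn.executemany(
        "INSERT INTO items (id, source, body, seen_at) "
        "VALUES (?, ?, ?, datetime('now')) "
        "ON CONFLICT(id) DO UPDATE SET body = excluded.body",
        [(i["id"], i["source"], i["body"]) for i in items],
    )
    conn.commit()

class WebhookHandler(BaseHTTPRequestHandler):
    # Assumes each webhook POST body is one JSON item: {"id", "source", "body"}.
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        seed(self.server.conn, [payload])
        self.send_response(204)
        self.end_headers()

def serve(port: int = 8080) -> None:
    server = HTTPServer(("127.0.0.1", port), WebhookHandler)
    server.conn = init_db()  # shared handle for the single-threaded server
    server.serve_forever()
```

The nice property of this layout is that the agent side never touches the APIs directly — it just gets read access to a pile of SQLite files it can query and cross-reference.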