I'm always glad to see people experimenting with different approaches to building and deploying apps. That said, the general idea of this Serverfree approach doesn't appeal to me. On any given day, I'll use 4 different devices. I need my data to synchronize seamlessly. I wouldn't use a program (in-browser or traditionally installed application) unless I can synchronize that data either by storing files in a Dropbox-like program or in the cloud. I don't want to have to remember which computer/browser combination I was working on.
For me the sweet spot is applications that store data on the local file system, but where the data is actually synchronized using something like NextCloud or Dropbox (optionally encrypted with something like Cryptomator) or iCloud, and where the applications are built to support that synchronization by merging changes from different devices and detecting and resolving conflicts.
Meaning, the only cloud component should be “dumb” data storage, and it should remain entirely optional, only needed for use across multiple devices.
I often think we should have a separation of concerns between storing/syncing data and application code. Syncing app files should be an OS-level feature, with many different available data stores, and it should sync well across different platforms.
It should be OS-level, but in 2024 NO vendor (Apple, Microsoft, Google) is going to make a new open protocol for cross-platform syncing, w/o binding it tightly to their authentication platform/device security system/etc.
I agree, and the other downside is that with the server-free approach described in the article, there wouldn't be a backup of your data off your device.
The author does mention privacy concerns — hence the appeal of storing the data locally on your device.
I work on PowerSync https://www.powersync.com/ — using embedded SQLite for local-first/offline-first which syncs with Postgres in the background.
I think using an architecture like that where an encrypted version of the data is synced to Postgres, and decrypted for access on the client, would balance the trade-offs well.
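To make that concrete, here is a minimal sketch of the client-side half using the standard Web Crypto API. This is my own illustration, not PowerSync's API; deriving the key from a user passphrase and wiring this into the sync layer are left out:

```ts
// Illustrative only: encrypt each record on-device before it is synced
// upstream, so the server (Postgres or anything else) stores ciphertext.
// `key` is assumed to be an AES-GCM CryptoKey derived from a user passphrase.

const toBase64 = (b: Uint8Array) => btoa(String.fromCharCode(...b));
const fromBase64 = (s: string) => Uint8Array.from(atob(s), (c) => c.charCodeAt(0));

async function encryptPayload(key: CryptoKey, plaintext: string) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per record
  const cipher = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  return { iv: toBase64(iv), data: toBase64(new Uint8Array(cipher)) };
}

async function decryptPayload(key: CryptoKey, row: { iv: string; data: string }) {
  const plain = await crypto.subtle.decrypt(
    { name: "AES-GCM", iv: fromBase64(row.iv) },
    key,
    fromBase64(row.data),
  );
  return new TextDecoder().decode(plain);
}
```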
There are private, distributed synchronization protocols in the works, such as the Willow Protocol:
https://willowprotocol.org/
They are still working things out. But synchronization across devices controlled by a principal is doable with the primitives they have already come up with.
Exactly. Initially we try to ditch server-side data storage; let's say we do it by encrypting the db and syncing it to a cloud service, then expect it to have synced to our other devices by the time we're about to use the app there. But that would cover just CRUD. Background processes would be a whole different problem to tackle. I still can't envision a self-hosted, decentralized backend on trivial devices.
https://en.wikipedia.org/wiki/Veilid
> Veilid is a peer-to-peer network and application framework released by the Cult of the Dead Cow on August 11, 2023, at DEF CON 31. Described by its authors as "like Tor, but for apps", it is written in Rust, and runs on Linux, macOS, Windows, Android, iOS, and in-browser WASM. VeilidChat is a secure messaging application built on Veilid.
https://youtube.com/watch?v=Kb1lKscAMDQ
This can be done through a shared database file that syncs however you want, e.g. stored in a shared cloud. The app itself then needs to have very robust conflict resolution code, but it can be done (a toy sketch follows below). I think some of the more security-focused open source password manager apps already use an approach like this.
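A toy sketch of the simplest version of that merge logic (all names here are made up; real apps often need per-field or CRDT-based merging rather than whole-record last-writer-wins):

```ts
// Illustrative last-writer-wins merge: each record carries a revision
// timestamp, and when two device copies of the database diverge, the
// newer write wins per record. Deletions are kept as tombstones so a
// delete on one device isn't resurrected by a stale copy elsewhere.

interface Rec {
  id: string;
  updatedAt: number; // wall-clock millis; a logical clock is more robust
  deleted?: boolean; // tombstone flag
  body: unknown;
}

function merge(local: Map<string, Rec>, remote: Map<string, Rec>): Map<string, Rec> {
  const out = new Map(local);
  for (const [id, theirs] of remote) {
    const ours = out.get(id);
    // Keep whichever side wrote last; ties arbitrarily favor remote.
    if (!ours || theirs.updatedAt >= ours.updatedAt) out.set(id, theirs);
  }
  return out;
}
```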
I was just made aware that TursoDB has a version of their client [1] built on the same underlying technology, so it seems it might be possible to have the best of both worlds: interacting with the db locally in the browser while it syncs to the remote instance (not 100% sure, though).
1. https://www.npmjs.com/package/@libsql/client-wasm
Is there anything stopping you from also syncing an encrypted copy of your data to a central server to be synced down to your other devices? It doesn't appear so.
A WebSocket between all your devices and a server (or even WebRTC between devices) could achieve this in parallel.
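A client-side sketch of the WebSocket variant (the relay URL and applyToLocalDb are placeholders; the server just fans each message out to the same user's other connections):

```ts
// Each device keeps a socket open to a dumb relay and broadcasts its local
// changes; incoming changes from other devices are fed through the same
// merge/conflict-resolution path as any other sync source.

const sock = new WebSocket("wss://relay.example/sync?user=abc"); // placeholder URL

function pushChange(change: { table: string; id: string; patch: unknown }) {
  if (sock.readyState === WebSocket.OPEN) sock.send(JSON.stringify(change));
}

sock.onmessage = (ev) => {
  const change = JSON.parse(ev.data);
  applyToLocalDb(change); // app-specific; same merge rules as any other sync path
};

declare function applyToLocalDb(change: unknown): void; // assumed app code
```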
I was going to mention WebRTC! It seems designed for video calling, but there are lots of cool use cases - I recently ran across https://github.com/dmotz/trystero, a dead simple WebRTC library for peer-to-peer multiplayer browser games.
There are a lot of exciting sync technologies being developed for this use case. I work on one of them at ElectricSQL (mentioned at the end by the OP), but we maintain a list of alternatives here: https://electric-sql.com/docs/reference/alternatives
Looking at this with an open mind, I'm curious what benefits running SQLite in WebAssembly with a proxied web worker API layer gives compared to using localStorage or something similar.
* Using SQL has clear benefits for writing an application. You can use existing stable tools for performing migrations.
* Using SQLite in a filesystem offers many advantages w.r.t performance and reliability. Do these advantages translate over when using WebAssembly SQLite over OPFS?
* How does SQLite / OPFS performance compare to reading / writing to localStorage?
* From what I know about web workers, the browser thinks it is making HTTP requests to communicate with subzero, while the web worker proxies these requests to a local subzero server. What is the overhead cost of doing this, and what benefits does this give over having the browser communicate directly with SQLite? (A sketch of that worker hop follows this list.)
* I remember seeing a demo of using [SQLite over HTTP](https://hn.algolia.com/?q=sqlite+http) a while back. I wonder if that can be implemented with web workers as an even simpler interface between the web and SQLite and how that affects bundle size...
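Regarding the web-worker question above: the main-thread-to-worker hop is usually a promise-based postMessage proxy along these lines (a hand-rolled sketch, not subzero's actual implementation):

```ts
// Main-thread side: queries are posted to the worker (which hosts Wasm
// SQLite) and matched back to pending promises by id.

type Pending = { resolve: (rows: unknown[]) => void; reject: (e: Error) => void };

const worker = new Worker("db-worker.js"); // hypothetical worker script
const pending = new Map<number, Pending>();
let nextId = 0;

worker.onmessage = ({ data }) => {
  const p = pending.get(data.id);
  if (!p) return;
  pending.delete(data.id);
  data.error ? p.reject(new Error(data.error)) : p.resolve(data.rows);
};

function query(sql: string, params: unknown[] = []): Promise<unknown[]> {
  const id = nextId++;
  return new Promise((resolve, reject) => {
    pending.set(id, { resolve, reject });
    worker.postMessage({ id, sql, params });
  });
}

// Usage: const rows = await query("SELECT * FROM todos WHERE done = ?", [0]);
```

The per-call overhead is essentially one structured clone of the message in each direction, which is usually small next to the query itself.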
> Using SQLite in a filesystem offers many advantages w.r.t performance and reliability. Do these advantages translate over when using WebAssembly SQLite over OPFS?
I would say generally yes. SQLite is known for its performance, and with Wasm SQLite, performance is strongly related to how the file system operations are implemented. There have been some good advances in this area in the last couple of years. My co-founder wrote this blog post which talks about the current state of SQLite on the web and goes into performance optimizations:
https://www.powersync.com/blog/sqlite-persistence-on-the-web
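For reference, opening a database on OPFS with the official @sqlite.org/sqlite-wasm build looks roughly like this, if I'm remembering the API correctly (it must run in a worker, and OpfsDb only exists when OPFS is available):

```ts
// Runs inside a worker: the OPFS-backed VFS needs APIs that aren't
// available on the main thread.
import sqlite3InitModule from "@sqlite.org/sqlite-wasm";

const sqlite3 = await sqlite3InitModule();
const db = new sqlite3.oo1.OpfsDb("/app.sqlite3"); // persisted in OPFS

db.exec("CREATE TABLE IF NOT EXISTS todos(id INTEGER PRIMARY KEY, title TEXT)");
const rows = db.exec({
  sql: "SELECT id, title FROM todos",
  returnValue: "resultRows",
});
```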
* localStorage is small, volatile, OPFS is big/durable
* main thread <-> db vs main thread <-> worker <-> db:
- firstly, SQLite with OPFS has to run in a web worker
- even if it were possible to run it in the main thread, this approach allows for a code structure similar to a traditional architecture (frontend/backend split): it's easy to route some requests to the web worker while letting other requests fall through to the backend server, without the "frontend code" needing to worry about the difference (see the service-worker sketch below)
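A purely illustrative service-worker version of that routing (paths and helpers are made up): anything under /api/local/ is answered from the in-browser database, and everything else falls through to the real backend untouched:

```ts
// Service worker: respond to "local API" requests from the in-browser db;
// not calling respondWith() lets every other request hit the network.
self.addEventListener("fetch", (event: any) => {
  const url = new URL(event.request.url);
  if (url.pathname.startsWith("/api/local/")) {
    event.respondWith(handleLocally(event.request));
  }
});

async function handleLocally(req: Request): Promise<Response> {
  const rows = await queryLocalDb(req); // e.g. forwarded to the SQLite worker
  return new Response(JSON.stringify(rows), {
    headers: { "content-type": "application/json" },
  });
}

declare function queryLocalDb(req: Request): Promise<unknown[]>; // assumed app code
```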
I am thinking the Willow Protocol would make a good base for local-first. There would be no privileged “backend”, but some peers can provide automated services.
I believe what is being proposed is a static site where user data is persisted locally using the WASM sqlite + OPFS. I guess it is also organized like a typical web app, but the app logic and database logic run locally.
I was expecting something different because it started with phrases like "no servers at all" and "entirely without any servers", but there's a regular web server serving static files.
I'm not a fan of the term "serverfree", though, since there is a web server. Also, the app and database servers from classic web apps continue to exist, albeit in a logical, local form. If this term somehow catches on for this style of app it will just cause endless confusion. I suppose it isn't a lot worse than some existing terms we've gotten used to (like "serverless"), but I'm always going to advocate to not repeat the mistakes of the past.
Speaking of WASM: is there a way to run code in the browser that calls endpoints secured with CORS? I tried looking it up recently but had no luck. I feel it's a pretty big limiter when trying to call out to third parties directly and letting people bring their own API key.
I've been thinking about making a web GUI over gmailctl for easy editing but want to make it very easy to use without hosting and without people sending me their keys.
If I understand correctly, you're talking about making HTTP requests from code running in the browser to a third-party API.
Whether you're doing WASM or Javascript, you use fetch() and need to have your CORS ducks in a row. How exactly you call fetch() depends on your toolchain, but anything trying to be general-purpose will expose it somehow.
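For example, something like this, using the real Gmail REST endpoint since gmailctl came up. It only works because Google's APIs generally serve permissive CORS headers; if a third party doesn't, no amount of client-side code can fix it without a proxy:

```ts
// Browser-side call with a user-supplied OAuth token; the request succeeds
// only if the API responds with Access-Control-Allow-Origin for your origin.
async function listLabels(accessToken: string) {
  const res = await fetch(
    "https://gmail.googleapis.com/gmail/v1/users/me/labels",
    { headers: { Authorization: `Bearer ${accessToken}` } },
  );
  if (!res.ok) throw new Error(`Gmail API error: ${res.status}`);
  return res.json();
}
```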
As I like to put it - Local-first is the real serverless. Your user's device is the real edge.
I think the future of the web needs to be that the server is optional; we need our data (whether personal or our company's) to be on our own devices.
We are all carrying around these massively powerful devices in our pockets; let's use that capability rather than offload everything to the cloud.
That costs battery life. With more powerful chips, more time can be spent sleeping, consuming minimal energy.
One of the things I find most exciting about local-first (and I'm fortunate enough to be working on it full time) is the sync tech that's being developed for it. I think 2024 is going to be the year local-first goes mainstream.
Hopefully the sync tech being developed for it is solidly open, but it's good to have access to your data regardless.
The thing that always bothered me about that article is:
> Notably, the object server is open source and self-hostable, which reduces the risk of being locked in to a service that might one day disappear.
It appears that the object server is neither open source nor self-hostable. The repository they link is mostly empty: it has a rich version history of "releases" that only change the changelog file.
I assume the article was accurate when written, and have always wondered what happened. I suspect MongoDB rewrote the git history to remove the code when they bought Realm. Was it ever open source? Did they intimidate people into taking down forks, or did nobody bother?
I do see an edit to the README around that time adding that a license is required to run the self-hosted server. It is dated about two months before the linked article, but they may not have noticed or it may be back-dated:
https://github.com/realm/realm-object-server/commit/fc0b399d...
There are many projects developing sync tech for local-first, I work on a fully open source one - ElectricSQL - and we are fortunate to have a couple of the CRDT co-inventors on the team. We maintain a list of any local-first sync project we know of here: https://electric-sql.com/docs/reference/alternatives
Clarification: I'm not the author of the linked post.
They are an incredible bunch, as are so many people in the local-first / CRDT space, many of whom I have had the opportunity to meet and collaborate with.
I read your post some time back and feel it's been an organizing force for developers in this space — great job and thanks for the work you put into it.
I often wonder about terminology. What was the reason you chose "local-first" over "offline-first" (or even "serverfree" as in this case)?
For me (not the author) "local first" makes it clear that "on device" is a first-class citizen, and the server is an afterthought. "Offline-first" or "server free" sounds much more limiting, like it will be able to limp when offline, but really wants to connect to a server eventually.
I also (personally) don't like "serverfree" because servers are good - they're not the problem! It's the "servers you don't and can't control" cloud dependencies that are the issue.
What we need is updates that only include security patches... FEATURE CHANGES SHOULD BE OPTIONAL, because all software tends to decrease in quality over time.
While a desirable state for users, this could quickly balloon into a nest of support issues for the maintainers, since there would be many different versions to patch when a security issue or other significant bug becomes apparent, increasing the project's response time to anything important. You could try to mitigate this by maintaining only a couple of versions (perhaps preview, current, and LTS, similar to Debian's sid/testing/stable), which would work for small projects. But for large ones, or those that see fairly rapid development, you run the risk of stable gaining a reputation for being out of date, unless you bump it forward regularly, which puts you back at square one: giving people feature updates they may not need.
If you're in the Microsoft world, try out the LTSC flavors (of Windows and Office). It's basically just that -- security updates and patches, no new features. Since switching, my environments have been much more stable: no Windows Updates that ruin my day, just the stuff I need to stay secure, none of the new crap they're trying to push...
Red Hat (with RHEL) and many other vendors offer that. It's called long-term support, extended long-term support, and so on. And it costs a fortune. Most people wouldn't want to buy it.
> One morning, as I was contemplating ways to procrastinate on doing marketing for SubZero without feeling guilty, an idea struck me. "I know, I'll engage in some content marketing... but what do I need for that? Oh, a cool demo project!" And just like that, I found a way to spend a month doing content marketing writing code.
I absolutely don't understand the point of this. Just reading the intro, it reads like technology for its own sake, just because you can. But what is the value, and what are the downsides?
If you store it on your phone, then it's not showing up on your other devices. If you lose or break your phone, then your data is gone. There are very few applications for which that's acceptable - basically just your calculator app.
If you don't store it on your phone, then it's stored on some kind of server, somewhere. Do you own and control that server, or does someone else? How does the application consume and update the data?
The article clearly states that data is stored on the device. That's the exact use case described - where people do not want any of their data stored on a server they don't control.
If that's you, and if privacy is important enough to you to want a local-only app, I think you'll find a solution to back up your data.
I don't think it's unreasonable to continue producing apps that do not require an internet connection to function and be useful.
Fair point. To quibble, though: I guess you are only thinking of iPhones and iPads. There are other mobile devices with built-in support for SD cards, and even USB drives, that easily allow you to save your data on them or use them to create backups of your data. These features allow the user to have better control of their data.
Edit: forgot some words
Every single product or service I pay for needs to work on Linux and on Android.
Like Spotify: I love that the music plays on Linux with the big speakers, but I am choosing the songs on the Android pocket device.
Aren't users all that matter in the long term?