> Using Zero is another option, it has many similarities to Electric, while also directly supporting mutations.
The core differentiator of Zero is actually query-driven sync. We apparently need to make this more clear.
You build your app out of queries. You don't have to decide or configure what to sync up front. You can sync as much, or as little as you want, just by deciding which queries to run.
If Zero does not have the data it needs on the client, queries automatically fall back to the server. Then that data is synced and available for the next query.
This ends up being really useful for:
- Any reasonably sized app. You can't sync all the data to the client.
- Fast startup. Most apps have publicly visible views that they want to load fast.
- Permissions. Zero doesn't require you to express your permissions in some separate system, you just use queries.
So the experience of using Zero is actually much closer to a reactive db, something like Convex or RethinkDB.
Except that it uses standard Postgres, and you also get the instant interactions of a sync engine.
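To make the query-driven part concrete, here is a rough sketch of what a view built this way might look like in React. It's only an approximation: the "issue" table, its columns, and the useZero hook are assumptions, and the exact query/useQuery signatures may differ from Zero's real API.

    import { useQuery } from "@rocicorp/zero/react";
    import { useZero } from "./zero-setup"; // app-specific hook returning the Zero instance (assumption)

    function OpenIssues() {
      const z = useZero();
      // Declare what this view needs. Zero answers from the local replica when it
      // can, falls back to the server when it can't, and keeps the result synced.
      const [issues] = useQuery(
        z.query.issue.where("open", true).orderBy("modified", "desc").limit(50)
      );
      return (
        <ul>
          {issues.map((i) => (
            <li key={i.id}>{i.title}</li>
          ))}
        </ul>
      );
    }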
I developed an open-source task management app based on CRDTs, with a local-first approach. The motivation was that I primarily manage personal tasks without needing collaboration features, and tools like Linear are overly complex for my use case.
This architecture offers several advantages:
1. Data is stored locally, resulting in extremely fast software response times
2. Supports convenient full database export and import
3. Server-side logic is lightweight, requiring minimal performance overhead and development complexity, with all business logic implemented on the client
4. Simplified feature development, requiring only local logic operations
There are also some limitations:
1. Only suitable for text data storage; object storage services are recommended for images and large files
2. Synchronization-related code requires extra caution in development, as bugs could have serious consequences
3. Implementing collaborative features with end-to-end encryption is relatively complex
The technical architecture is designed as follows:
1. Built on the Loro CRDT open-source library, allowing me to focus on business logic development
2. Data processing flow:
User operations trigger CRDT model updates, which export JSON state to update the UI. Simultaneously, data is written to the local database and synchronized with the server.
3. The local storage layer is abstracted through three unified interfaces (list, save, read), using platform-appropriate storage solutions: IndexedDB for browsers, file system for Electron desktop, and Capacitor Filesystem for iOS and Android.
4. Implemented end-to-end encryption and incremental synchronization. Before syncing, the system calculates differences based on the server and client versions, then encrypts the data with AES before uploading. The server maintains a base version with its content and incremental patches between versions. When accumulated patches reach a certain size, the system uploads an encrypted full database as the new base version, keeping subsequent patches lightweight (a rough sketch of this flow follows below).
If you're interested in this project, please visit https://github.com/hamsterbase/tasks
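A minimal sketch of that flow (items 2 and 4), assuming the loro-crdt package and WebCrypto AES-GCM; the container names and the upload helper are made up, and the Loro method shapes may vary between versions:

    import { LoroDoc } from "loro-crdt";

    const doc = new LoroDoc();
    const tasks = doc.getMap("tasks");

    // 1. A user operation updates the CRDT model...
    tasks.set("task-1", { title: "Write the sync layer", done: false });
    doc.commit();

    // 2. ...the JSON state is exported to drive the UI...
    const uiState = doc.toJSON();

    // 3. ...and the binary export is encrypted (AES-GCM here) before upload.
    async function encryptForUpload(key: CryptoKey, bytes: Uint8Array) {
      const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV per upload
      const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, bytes);
      return { iv, ciphertext: new Uint8Array(ciphertext) };
    }

    // Full snapshot for a new base version; incremental updates are exported the
    // same way and uploaded as patches (uploadPatch and key are app-specific).
    const snapshot = doc.export({ mode: "snapshot" });
    // await uploadPatch(await encryptForUpload(key, snapshot));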
I'm all-in on SSR. The client shouldn't have any state other than the session token, current URL and DOM.
Networks and servers will only get faster. The speed of light is constant, but we aren't even using its full capabilities right now. Hollow core fiber promises upward of a 30% reduction in latency for everyone using the internet. There are RF-based solutions that provide some of this promise today. Even ignoring a wild RTT of 500ms, an SSR page rendered in 16ms would feel relatively instantaneous next to any of the mainstream web properties online today if delivered on that connection.
I propose that there is little justification to take longer than a 60Hz frame (~16ms) to render a client's HTML response on the server. A Zen 5 core can serialize something like 30-40 megabytes of JSON in that timeframe. From the server's perspective, this is all just a really fancy UTF-8 string. You should be measuring this stuff in microseconds, not milliseconds. The transport delay being "high" is not a good excuse to get lazy with CPU time. Using SQLite is the easiest way I've found to get out of millisecond jail. Any hosted SQL provider is like a ball & chain when you want to get under 1ms.
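As a rough illustration of the SQLite-plus-in-process-render point (schema and names are made up; this is a sketch, not a benchmark):

    import http from "node:http";
    import Database from "better-sqlite3";

    const db = new Database("app.db", { readonly: true });
    const listTasks = db.prepare("SELECT id, title FROM tasks ORDER BY id LIMIT 50");

    http
      .createServer((req, res) => {
        const t0 = process.hrtime.bigint();
        // Synchronous, in-process query: no network hop to a hosted database.
        const rows = listTasks.all() as { id: number; title: string }[];
        const body =
          "<!doctype html><ul>" +
          rows.map((r) => `<li>${r.title}</li>`).join("") + // escape HTML in real code
          "</ul>";
        const renderMs = Number(process.hrtime.bigint() - t0) / 1_000_000;
        res.setHeader("Server-Timing", `render;dur=${renderMs.toFixed(3)}`);
        res.end(body);
      })
      .listen(8080);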
There are even browser standards that can mitigate some of the navigation delay concerns:
https://developer.mozilla.org/en-US/docs/Web/API/Speculation...
This isn't an argument for SSR. In fact there's hardly a universal argument for SSR. You're thinking of a specific use case where there's more compute capacity on the server, where logic can't be easily split, etc. There are plenty of examples where client-side rendering is faster.
Rendering logic can be disproportionately complex relative to the data size. Moreover, client resources may actually be larger in aggregate than the server's. If SSR were the only reasonable game in town we wouldn't have excitement around WebAssembly.
Also take a look at the local-computation post https://news.ycombinator.com/item?id=44833834
The reality is that you can't know which one is better, and you should be able to decide at request time.
If you could simply drop in a library to any of your existing SSR apps that:
- is 50kb (gzipped)
- requires no further changes from you (either now or in the future)
- enables offline/low bandwidth use of your app with automatic state syncing and zero UX degradation
would you do it?
The problem I see with SSR evangelism is that it assumes that compromising that one use case (offline/low bandwidth use of the app) is necessary to achieve developer happiness and a good UX. And in some cases (like this) it goes on to justify that compromise with promises of future network improvements.
The fact is, a low bandwidth requirement will always be a valuable feature, no matter the context. It's especially valuable to people in third-world countries, in remote locations, or being served by Comcast (note I'm being a little sarcastic with that last one).
> - enables offline/low bandwidth use of your app with automatic state syncing and zero UX degradation
> would you do it?
No, because the "automatic state syncing and zero UX degradation" is a "draw the rest of the owl" exercise wherein the specific implementation details are omitted. Everything is domain specific when it comes to sync-based latency hiding techniques. SSR is domain agnostic.
> low bandwidth requirements
If we are looking at this from a purely information theoretical perspective, the extra 50kb gzipped is starting to feel kind of heavy compared to my ~8kb (plaintext) HTML response. If I am being provided internet via avian species in Africa, I would also prefer the entire webpage be delivered in one simple response body without any dependencies. It is possible to use so little javascript and css that it makes more sense to inline it. SSR enables this because you can simply use multipart form submissions for all of the interactions. The browser already knows how to do this stuff without any help.
Would you try to write/work on a collaborative text document (i.e. Google Docs or Sheets) by editing a paragraph/sentence that's server-side rendered and hope nobody changes the paragraph mid-work because the developers insisted on SSR?
These kinds of tools (Docs, Sheets, Figma, Linear, etc.) work well because individual changes have little impact, and conflict resolution is better avoided by users noticing that someone else is working on the same spot and simply getting realtime updates.
Then again, hotel booking or similar has no need for something like that.
Then there's been middle ground, like an enterprise logistics app that had some badly YOLO'd syncing: it kinda needed some of it, but there was no upfront planning and it took time to retrofit a sane design since there were so many domain- and system-specific things lurking with surprises.
Latency is additive, so all that copper coax and mux/demux between a sizeable chunk of Americans and the rest of the internet means you're looking at a minimum roundtrip latency of 30ms if the server is in the same city. Most users are also on Wi-Fi, which adds an additional mux/demux + rebroadcast step that adds even more. And most people do not have the latest CPU. Not to mention mobile users over LTE.
Sorry, but this is 100% a case of privileged developers thinking their compute infrastructure situation generalizes: it doesn't, and it is a mistake to take shortcuts that assume as such.
After that, with competent engineering everything should be faster on the client, since it only needs state updates, not a complete re-render.
If you don't have competent engineering, SSR isn't going to save you.
uh have you ever tried pinging a server in your same city? It's usually substantially <30ms. I'm currently staying at a really shitty hotel that has 5mbps wifi, not to mention I'm surrounded by other rooms, and I can still ping 8.8.8.8 in 20ms. From my home internet, which is /not/ fiber, it's 10ms.
ElectricSQL and TanStack DB are great, but I wonder why they focus so much on local first for the web over other platforms, as in, I see mobile being the primary local first use case since you may not always have internet. In contrast, typically if you're using a web browser to any capacity, you'll have internet.
Also, the former technologies are local first in theory, but without conflict resolution they can break down easily. This has been my experience making mobile apps that need to be local first, which led me to using CRDTs for that use case.
Because building local first with web technologies is like infinity harder than building local first with native app toolkits.
A native app is installed and available offline by default. A website needs a bunch of weird shenanigans with AppManifest or ServiceWorker, which are more like a pile of parts you can maybe use to build offline availability.
Native apps can just… make files, read and write from files with whatever 30 year old C code, and the files will be there on your storage. Web you have to fuck around with IndexedDB (total pain in the ass), localStorage (completely insufficient for any serious scale, will drop concurrent writes), or OriginPrivateFileSystem. User needs to visit regularly (at least once a month?) or Apple will erase all the local browser state. You can use JavaScript or hit C code with a wrench until it builds for WASM w/ Emscripten, and even then struggle to make sync C deal with waiting on async web APIs.
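For what it's worth, the Origin Private File System route looks roughly like this today, paired with navigator.storage.persist() to ask for protection from eviction. Support still varies by browser (Safari has historically limited parts of OPFS to workers), so treat this as a sketch:

    // "app.db" is just an example filename.
    async function saveLocal(bytes: Uint8Array): Promise<boolean> {
      // Ask for persistent storage so the origin is less likely to be evicted.
      // The browser is free to say no.
      const persisted = await navigator.storage.persist();

      const root = await navigator.storage.getDirectory(); // OPFS root
      const handle = await root.getFileHandle("app.db", { create: true });
      const writable = await handle.createWritable(); // some browsers only offer sync access handles in workers
      await writable.write(bytes);
      await writable.close();
      return persisted;
    }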
Apple has offered CoreData + CloudKit since 2015, a complete first-party solution for local apps that sync, no backend required. I’m not a Google enthusiast; maybe Firebase is their equivalent? Idk.
Well .... that's all true, until you want to deploy. Historically deploying desktop apps has been a pain in the ass. App stores barely help. That's why devs put up with the web's problems.
Ad: unless you use Conveyor, my company's product, which makes it as easy as shipping a web app (nearly):
https://hydraulic.dev/
You are expected to bring your own runtime. It can ship anything but has integrated support for Electron and JVM apps, Flutter works too although Flutter Desktop is a bit weak.
And if you didn't like or care to learn CoreData? Just jam a SQLite db in your application and read from it; it's just C. This was already working before Angular or even Backbone.
Sure they're harder to build but my question is mainly why build them (for web in particular)? I don't see the benefits for a web app where I'll usually be online versus a mobile app where I may frequently have internet shortages when out and about.
I don't think Apple's solution syncs seamlessly, I needed to use CRDTs for that, that's still an unsolved problem for both mobile and web.
> Because building local first with web technologies is like infinity harder than building local first with native app toolkits.
You just have to write one for every client, no big deal, right? Just 2-5 (depending on if you have mobile clients and if you decide to support Linux too) times the effort.
You even say it yourself, you'll have to use Apple's sync and data solutions, and figure it out for Windows, Android and maybe Linux. Should be easy to sync data between the different storage and sync options...
Oh, and you have to figure out how to build, sign and update for all OSes too. Pay the Apple fee, the Microsoft whatever nonsense to not get your software flagged as malware on installation. It's around a million times easier to develop and deploy a web application, and that's why most developers and companies are defaulting to that, unless they have very good reasons.
I think this is a fascinating and deep question, that I ponder often.
I don't feel like I know all the answers, but as the creator of Replicache and Zero here is why I feel a pull to the web and not mobile:
- All the normal reasons the web is great – short feedback loop, no gatekeepers, etc. I just prefer to build for the web.
- The web is where desktop/productivity software happens. I want productivity software that is instant. The web has many, many advantages and is the undisputed home of desktop software now, but ever since we went to the web the interaction performance has tanked. The reason is because all software (including desktop) is client/server now and the latency shows up all over the place. I want to fix that, in particular.
- These systems require deep smarts on the client – they are essentially distributed databases, and they need to run that engine client-side. So there is the question of what language to implement in. You would think that C++/Rust -> WASM would be obvious but there are really significant downsides that pull people to doing more and more in the client's native language. So you often feel like you need to choose one of those native languages to start with. JS has the most reach. It's most at home on the desktop web, but it also reaches mobile via RN.
- For the same reason as prev, the complex productivity apps that are often targeted by sync engines are often themselves written using mainly web tech on mobile. Because they are complex client-side systems and they need to pick a single impl language.
Mobile has really strong offline-primitives compared to the web.
But the web is primarily where a lot of productivity and collaboration happens; it’s also a more adversarial environment. Syncing state between tabs; dealing with storage eviction. That’s why local first is mostly web based.
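The cross-tab piece at least has a reasonable primitive now; a minimal sketch (the channel name and message shape are made up):

    // Every open tab listens on the same BroadcastChannel; when one tab commits a
    // local write it announces the change so the others can reload from storage.
    const channel = new BroadcastChannel("local-first-sync");

    export function announceChange(docId: string) {
      channel.postMessage({ type: "doc-changed", docId });
    }

    channel.onmessage = (event: MessageEvent) => {
      if (event.data?.type === "doc-changed") {
        // reloadFromLocalStore(event.data.docId); // app-specific (assumption)
      }
    };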
I think the current crop of sync engines greatly benefit from being web-first because they are still young and getting lots of updates. And mobile updates are a huge pain compared to webapp updates.
The PWA capabilities of webapps are pretty OK at this point. You can even drive notifications from the iOS pinned PWA apps, so personally, I get all I need from web apps pretending to be mobile apps.
Local-first is actually the default in any native app.
In this case it's not about being able to use the product at all, but the joy of using an incredibly fast and responsive product, which is why you want it to be local-first.
Not a lot of mention of the collaboration aspect that local-first / sync engines enable. I've been building a project using Zero that is meant to replace a Google Sheet a friend of mine uses for his business. He routinely gets on a Google Meet with a client, they both open the Sheet, and then they go through the data.
Before the emergence of tools like Zero I wouldn't have ever considered attempting to recreate the experience of a Google Sheet in a web app. I've previously built many live updating UIs using web sockets but managing that incoming data and applying it to the right area in the UI is not trivial. Take that and multiply it by 1000 cells in a Sheet (which is the wrong approach anyway, but it's what I knew how to build) and I can only imagine the mess of code.
Now with Zero, I write a query to select the data and a mutator to change the data, and everything syncs to anyone viewing the page. It is a pleasure to work with, and I enjoy building the application rather than sweating over applying hyper-specific incoming data changes.
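For a sense of scale, the write side is about as small as the read side. A hedged sketch only: the "cell" table, its columns, and the useZero hook are assumptions, and the mutate API shape may differ from Zero's real one.

    function CellEditor({ cellId }: { cellId: string }) {
      const z = useZero(); // app-specific hook returning the Zero instance (assumption)
      return (
        <input
          onChange={(e) =>
            // Applies to the local replica (and the UI) immediately, then syncs to
            // the server and out to everyone else viewing the sheet.
            z.mutate.cell.update({ id: cellId, value: e.currentTarget.value })
          }
        />
      );
    }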
I've been very impressed by Jazz -- it enables great DX (you're mostly writing sync, imperative code) and great UX (everything feels instant, you can work offline, etc).
Main problems I have are related to distribution and longevity -- as the article mentions, it only grows in data (which is not a big deal if most clients don't have to see that), and another thing I think is more important is that it's lacking good solutions for public indexes that change very often (you can in theory have a public readable list of ids). However, I recently spoke with Anselm, who said these things have solutions in the works.
All in all local-first benefits often come with a lot of costs that are not critical to most use cases (such as the need for much more state). But if Jazz figures out the main weaknesses it has compared to traditional central server solutions, it's basically a very good replacement for something like Firebase's Firestore in just about every regard.
Yeah, Jazz is amazing. The DX is unmatched. My issue when I used it was, they mainly supported passkey-based encryption, which was poorly implemented on windows. That made it kind of a non-starter for me, although I'm sure they'll support traditional auth methods soon. But I love that it's end-to-end encrypted and it's super fun to use.
Local-First & Sync-Engines are the future. Here's a great filterable datatable overview of the local-first framework landscape:
https://www.localfirst.fm/landscape
My favorite so far is Triplit.dev (which can also be combined with TanStack DB); 2 more I like to explore are PowerSync and NextGraph. Also, the recent LocalFirst Conf has some great videos, currently watching the NextGraph one (https://www.youtube.com/watch?v=gaadDmZWIzE).
How is the database migration support for these tools?
Needing to support clients that don’t phone home for an extended period and therefore need to be rolled forward from a really old schema state seems like a major hassle, but maybe I’m missing something. Trying to troubleshoot one-off front-end bugs for a single product user can be a real pain; I’d hate to see what it’s like when you have to factor in the state of their schema as well.
I can't speak to the other tools, but we built PowerSync using a schemaless protocol under the hood, specifically for this reason. Most of the time you don't need to implement migrations at all. For example adding a new column just works, as the data is already there when the schema is rolled forward.
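To illustrate why that works (not PowerSync's actual code, just the general "JSON under the hood, schema as a view" idea, sketched with better-sqlite3):

    import Database from "better-sqlite3";

    const db = new Database(":memory:");
    db.exec(`CREATE TABLE synced_rows (id TEXT PRIMARY KEY, data TEXT)`); // data = JSON from the sync stream
    db.prepare(`INSERT INTO synced_rows VALUES (?, ?)`).run(
      "1",
      JSON.stringify({ title: "hello", priority: 2 }) // "priority" arrived before the client schema knew about it
    );

    // Old client schema only exposes "title".
    db.exec(`CREATE VIEW todos_v1 AS
      SELECT id, json_extract(data, '$.title') AS title FROM synced_rows`);

    // After the schema rolls forward, the view simply exposes the extra column;
    // the underlying data needs no migration.
    db.exec(`CREATE VIEW todos_v2 AS
      SELECT id, json_extract(data, '$.title') AS title,
             json_extract(data, '$.priority') AS priority FROM synced_rows`);

    console.log(db.prepare(`SELECT * FROM todos_v2`).all()); // [{ id: '1', title: 'hello', priority: 2 }]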
Thank you for this, I'm going to have to check out Triplit. Have you tried InstantDB? It's the one I've been most interested in trying but haven't yet.
"Works like a charm" / "not listed" does not really do it justice, its much worse. All of the mentioned "solutions" will inevitably lose data on conflicts one way or another and i am not aware of anything from that school of thought, that has full control over conflict resolution as couchdb / pouchdb does. Apparently vibe coding crdt folks do not value data safety over some more developer ergonomics. It is an tradeoff to make for your own hobby projects if you are honest about it but i don't understand how this is just completely ignored in every single one of these posts.