Also, we're currently in the midst of a new project called Cambria, exploring some of the consequences and data demands for a new Cambrian era of software here: https://inkandswitch.github.io/cambria/
This Cambria project looks pretty interesting. Is it an internal project only? One of the blog posts gives an example of running it, but I couldn’t find any links to source or binaries, so I am assuming it is currently internal only.
> ... cloud apps depend on the service continuing to be available: if the service is unavailable, you cannot use the software, and you can no longer access your data created with that software. This means you are betting that the creators of the software will continue supporting it for a long time — at least as long as you care about the data. Although there does not seem to be a great danger of Google shutting down Google Docs anytime soon, popular products (e.g. Google Reader) do sometimes get shut down or lose data, so we know to be careful.
Plus, the value of the cloud app is not just your data, but the network effects. Like, if you've emailed links to a GDocs document, and 5 years later you decide to move to another service, those GDocs links will 404, regardless of whether you've transferred all of your data to the new service.
With local-first apps, the URL starts with you, not some-saas-provider.com.
Many far smarter people have said this before, far more eloquently than I can, but in short:
Cloud Computing (and SaaS even more so) is little more than just another attempt to recreate access/info monopolies: essentially the same profit proposition that existed with closed-source software, while pretending to be one of the cool kids and using more politically acceptable (but in this context rather meaningless) terms like Open Source and Open Standards. It may be a different generation of companies, with slightly different cultures, but they are just as predatory in nature as the old ones.
It's going to be a rude awakening, when some of the bigger service providers will eventually fall over (which they will). Of course, everyone will blame anything and everything but their own willful ignorance, when that happens.
>Cloud Computing (and SaaS even more so) is little more than just another attempt to recreate access/info monopolies [...] everyone will blame anything and everything but their own willful ignorance
For some reason, "cloud computing" has become a bogeyman and therefore corporations paying for it are clueless "sheeple".
To help prevent the phrase "cloud computing" from distorting our thinking, we have to remember that companies have been paying others for off-premise computing without calling it "cloud" or "SaaS" for decades before Amazon AWS, Salesforce, etc existed.
Examples...
In the 1960s, IBM's SABRE[1] airline reservations system was the "cloud" for companies like American Airlines, Delta, etc.
In the 1980s, many companies used to process payroll with in-house accounting software and print paychecks on self-owned dot-matrix printers. But most companies eventually outsourced all that to specialized companies such as ADP[2].
Companies tried to manage employees' retirement benefits with in-house software, but most eventually outsourced that to companies like Fidelity[3]. Likewise, even companies that self-fund their own healthcare benefits will still outsource the administration to a company like Cigna[4]. Don't install a bunch of "healthcare management software" on your own on-premise servers; just use the "cloud/SaaS" computers that Cigna has.
Some companies (really old ones) used to print their own stock ownership certificates and mail them out to grandma. Now, virtually every company outsources that to another company. Most companies that have Employee Stock Purchase Plans outsource the administration of it to a company like Computershare[5].
The major difference now that "cloud" terminology has taken hold is that services like AWS offer generic compute (EC2) and solutions not tied to any industry vertical. Otherwise, the so-called "cloud" has been going on for decades. Amazon AWS made the "cloud" really convenient by allocating off-premise resources via a web interface (dashboard or REST API) instead of a call to a salesperson from IBM/ADP/Fidelity/Cigna/etc.
That doesn't mean it's always correct to buy into everything the cloud offers. Pick and choose the tradeoffs that make financial sense. Netflix got rid of their datacenters and moved 100% of the customer billing to the cloud. But Dropbox did the opposite and migrated from AWS to their own datacenter. They're both correct for their situations.
>, when some of the bigger service providers will eventually fall over (which they will).
The big established vendors like AWS, Azure, and GCP ... all have enough business that they will be around for decades. If anybody will exit, I'd guess it would be the smaller players like Oracle Cloud.
I don't think that's the full picture. There are lots of good reasons for small and medium-sized providers of b2c software to prefer SaaS and cloud delivery over on-premises that aren't anything to do with access/info monopolies.
For example, as a small provider you'll just kill all your velocity if you try to deliver on-prem to a number of large enterprises. They will all have different upgrade policies, testing and approval policies, approved hardware, etc., and pretty soon you will suffer a death by a thousand cuts and not be able to deliver anything.
For example, I once had a client say to me that we had to replace Postgres in our stack with Oracle because that was their approved database, even though we were the ones supporting the stack. Another client delayed us by six months because they decided to order super bleeding-edge network cards (which I repeatedly said we didn't need); the cards took ages to arrive, and when they did, they didn't work with the "corporate approved" version of Linux the client insisted on using, which cost another couple of months.
With SaaS you don't have to deal with any of those things. You do the one-off (painful) third-party vendor approval process, go through all the infosec audits, etc., but after that you own the stack and can run it the way you want.
If you want to change the hardware layout on-prem, you have to go cap-in-hand to the client to get them to stump up cash, then wait months while physical boxes get allocated, racked up, etc. If you're in the cloud, you make a change to your Terraform config, check it in, and it gets pushed out through your CI/CD pipeline.
If you want to roll changes to all your customers that's really easy as a SaaS/cloud offering whereas it's very hard if they're each on-prem.
etc etc
It's easy to be cynical, but there are very significant benefits to the vendor of this model. There can also be benefits to the customer too.
In addition, in my experience when you deal with a big enterprise customer you are contractually committed to providing "transition assistance" when the contract ends (even if your company goes bust) and returning data in mutually agreed open formats. So vendor lock-in doesn't really apply either.
> Cloud Computing (and SaaS even more so) is little more than just another attempt to recreate access/info monopolies
Maybe this is true of some commercial cloud companies, but "cloud" computing is much larger in scope than you make it sound. There is a whole shadow PaaS/SaaS world used primarily by various research communities, for instance, often called "grid" instead of cloud, wherein nearly everything is publicly funded and the value proposition is web-based access to data stores, HPC clusters, etc, instead of every individual hacking their own data science environment on their laptop.
What do you propose? That every company reinvent the wheel or host everything locally even if it’s not their core competency? Every company has to decide what its “unfair advantage” is and concentrate on that.
If you're emailing links to Google Docs and you expect them to last for 5 years, then you're doing it all wrong. Google Docs are great for collaboration, authoring, review, and publication, but durable publication or archiving is a different use case that only a permanent, self-managed, URI-based solution can deliver. I guess that's part of what you're alluding to.
This is the 3rd time in a week a "Local-First software" overview has been submitted and the 2nd time it's made the front page here. I'm pretty surprised about that because I'm about to release a local-first, offline-first, option for an app I make.
This article also quickly moves past "local-first" software to conflict resolution which, in my opinion, is a distinctly different issue. It's certainly not reason enough to hold off offering users a local-first option.
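For readers who haven't met the conflict-resolution machinery the article leans on (CRDTs), here is a minimal sketch of one of the simplest variants, a last-writer-wins register. All names are invented for illustration; real local-first libraries such as Automerge are far more sophisticated:

```javascript
// Each replica tags a value with a (timestamp, replicaId) pair.
const write = (value, timestamp, replicaId) => ({ value, timestamp, replicaId });

// Merge is deterministic and order-independent: the higher timestamp wins,
// and replicaId breaks ties, so every replica converges to the same value.
function merge(a, b) {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  return a.replicaId > b.replicaId ? a : b;
}

const laptop = write('Alice (home)', 100, 'laptop');
const phone  = write('Alice (cell)', 105, 'phone');
console.log(merge(laptop, phone).value); // 'Alice (cell)'
console.log(merge(phone, laptop).value); // same result in either order
```

The point of the tiebreaker is that two replicas that exchange states in any order end up identical, which is exactly the property that lets a local-first app sync without a coordinating server.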
At this point I believe that since it can be done it should be done. I'll even go so far as to say it's a necessity. At some point users will understand it's a necessity and demand it. All that really needs to happen to convince them is one big incident where they lose access to their data for an extended period of time, or worse yet, lose all their data forever, and it won't matter why or how.
Aside from that, as more app makers start offering local-first options and users begin to see the benefits of that they will begin to demand it. That could take some time, but I expect it's inevitable.
There are other benefits to a local-first approach for developers. Take a "Contacts" app for example. If we have a standard for saving contacts data on the client side that any app could access this would give users and developers options to create and use new apps and features that all use the same data.
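To make the shared-contacts idea concrete, here is a sketch of what a common client-side record shape and validator might look like. The field names and the validator are entirely hypothetical (not any existing standard, though vCard already covers much of this ground):

```javascript
// Hypothetical shared contact record: any app reading or writing the common
// local store agrees on this shape, so the data outlives any single app.
const REQUIRED = ['id', 'name'];

function isValidContact(record) {
  if (typeof record !== 'object' || record === null) return false;
  return REQUIRED.every(
    (field) => typeof record[field] === 'string' && record[field].length > 0
  );
}

const contact = {
  id: 'c-42',                    // stable identifier shared across apps
  name: 'Ada Lovelace',
  emails: ['ada@example.org'],   // optional, open-ended fields
  updatedAt: '2020-05-01T12:00:00Z',
};

console.log(isValidContact(contact));        // true
console.log(isValidContact({ name: 'x' }));  // false, missing id
```

Any app that validates against the shared shape before writing can safely interoperate with the others, which is the whole appeal of a client-side standard.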
CouchDB & PouchDB.js provide a pretty solid and easy way to do this right now. Installed on the user's desktop PC, CouchDB provides the missing link to a robust client side web app runtime environment.
There may be other ways of achieving this right now, but I am not aware of them.
remoteStorage is pretty cool, and it's a shame that neither it nor something like it has really taken off yet. The spec has some rough edges, though—in particular the protocol requires a smart server to handle the network requests, when it should be fairly straightforward to define a "static" profile that can turn most commodity dumb hosts (Neocities, GitHub Pages, etc.) into a user store. I'm convinced that this seemingly minor design tweak would give remoteStorage a new life and cause it to spread like wildfire.
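To illustrate the "static profile" idea (this is speculation about a possible design, not anything in the remoteStorage spec): reads could be reduced to plain GETs against a fixed file layout that a dumb host can serve. The path convention below is invented for illustration:

```javascript
// Hypothetical: map a remoteStorage-style path onto a static host URL.
// Writes would have to go elsewhere; reads become plain GETs that a
// commodity static host (Neocities, GitHub Pages) could serve.
function staticUrl(baseUrl, userPath) {
  const clean = userPath.replace(/^\/+/, '');       // strip leading slashes
  const base = baseUrl.replace(/\/+$/, '');         // strip trailing slashes
  return `${base}/storage/${clean}.json`;
}

console.log(staticUrl('https://user.example.net/', '/contacts/ada'));
// 'https://user.example.net/storage/contacts/ada.json'
```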
The spec gets periodically refreshed/resubmitted. It last happened a couple of months ago and is set to expire at the end of the year.
Right now I just point them at a different web page. Basically I have a local.html and a cloud.html that call a different .js config file that points to either their local CouchDB or the one on my web server.
> If we have a standard for saving contacts data on the client side that any app could access this would give users and developers options to create and use new apps and features that all use the same data. [...] There may be other ways of achieving this right now, but I am not aware of them.
This is the content providers design that was heavily touted in the early-ish days of Android.
I think there's probably a market for a personal cloud, which probably sounds dumber than personal computer did in the 80s. What I mean by that is a computer somewhere in the garage, like a furnace, with enough compute and storage to drive all the devices and appliances in a house. In this model, devices do not have CPUs or memory, only input/output and a network chip.
The way this would work is for the computer in the garage to have the ability to divide itself arbitrarily into VMs for each purpose, with an ecosystem of images designed for things like fridges and gaming consoles. It should be possible to add or upgrade compute to the device in a hot swapped fashion, and because it doesn't have to be in a thin tablet, it could be easily cooled.
Along these lines, I’m astounded that the “selfhosted” subreddit has almost 90k subscribers (for comparison, “Microsoft” has 112k and “FigmaDesign” has 2.5k).
Interestingly, the solution to cloud software data ownership seems to be to use a self-hosted alternative, rather than use a non-Cloud solution like I would have expected.
I wonder if there would be market for community clouds, or neighborhood computes? Imagine that a new apartment building comes bundled with a server room in the basement. Every dweller gets compute/storage there. This could serve as edge cache for services like Netflix/YouTube, as well as for the ecosystem you describe.
I once imagined that homomorphic encryption would allow people to store data in their personal/neighborhood clouds and have third party SaaS code operate on that data locally. But I've recently been made to understand that homomorphic encryption would also allow companies to fully close off any access to data beyond what a program/service wants to give out, and unfortunately I get the feeling that the market will prefer the latter over the former.
Could be. You could also implement it for smaller businesses. I think another possibility is to sell excess compute back to some decentralized cloud, the same way you could sell excess solar power back to the grid.
I would love a hardware/software solution that makes it easy to back up my data (ideally with integrations to Google Takeout, Facebook, etc.). Perhaps it exists already?
Edit: of course local-first does not mean merely "backup", but instead the (redundant) hardware serves as a primary data store. I would welcome that as well!
I skimmed the desired qualities, the review of current tools, and finally the software centric approach to achieving the stated goals.
While we can reasonably expect software elements in any proposed solution, the hardware and physical elements of distributed computing may provide a far simpler pathway and likely will permit much greater reuse of existing proven software approaches.
For example, all future multi-unit residences could come with a 'data center' alongside the boiler, or possibly the individual units will host this equipment alongside their air-conditioning units. All your cloud apps can now point to this cloud. I don't see any fundamental reason why the 'data center' cannot become a modular utility unit, coming in domicile, commercial, and industrial-grade flavors.
In my view, the pure-software approach to the 'modern information society' has implicit political dimensions. One of these is the concentrated private ownership and control over physical resources which are now a required substrate of modern society. I for one am not ready to accept that as 'acceptable'.
What's the point of putting an amateur-run data center in every apartment building instead of using a proper one in town? Or instead of just putting it in my unit (with off-site backup of course), since a personal data center is just a single computer?
My landlord can barely run the water and A/C; no way they can run IT.
The improved quality and reliability is worth it for the trivial latency cost.
You are assuming the only possible solutions require user maintenance.
However, your implicit point regarding income level and the range in quality of building management is valid, and successful products in this space would address it.
There is a whole bunch of reasons, and that bunch of reasons has a name: IoT.
I mean that the security of such utility units would be roughly what we already see with the current breed of IoT devices. To keep some kind of quality level for such devices there would have to be one big company that produces them, which does not solve the problem. If you look at IoT device manufacturers now, there is so much crap floating around because there are so many of them.
I don't think it is "evil corporations concentrating power"; it is more "normal people have better things to do". If you are a plumber, you want to spend time fixing pipes, not setting up your homepage. Putting some ad on Facebook is a perfect solution for a plumber.
You cannot possibly compare IoT (I work in the space) with servers as utility appliances.
IoT is designed to work in extreme edge conditions: low power; intermittent connectivity; constrained local storage; limitations on embedded code, etc.
Further, there is currently NO financial incentive for anyone to tackle the issues necessary to take these bits of technology and make them 'home' technology. We've done this for all sorts of things, including controlled combustion in the basement for heating.
You also have two strawmen here that you attack:
1 - I did not say anything about the "evil" of corporations. Simply that it is not acceptable.
2 - A "normal" person in a modern multi-residence is hardly bothered with "fixing the boiler", "the network connection", "the fire alarm system", or any other utility tech. If you are asserting that this is "impossible" for "networking and hosting" (!!), please make the case.
The solution space is fairly permissive, with various business models to consider. It should definitely be explored.
One of those stories is someone with a bad memory for passwords. One is a person conducting business on his personal account and triggering flags. Another is a business messing up.
It's a good habit to keep multiple interlocking personal email accounts from multiple providers, but being cloud-first is still obviously correct.
The No Spinners thing seems to be my professional niche. At every job I'm tasked with cleaning up a poorly performing native app. And it's always caused by developers writing views like web apps: posting their server requests as views open and firing up a spinner to wait.
It’s not that hard to have a caching strategy. And then your native app feels like a native app.
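One such caching strategy is a hand-rolled stale-while-revalidate: render the cached value immediately, refresh in the background, and only show a spinner on a true cold start. A minimal sketch (all names here are illustrative, and the fetcher stands in for a real network call):

```javascript
// Minimal stale-while-revalidate cache: callers get the cached value
// synchronously (no spinner) while a background fetch refreshes it.
function makeCache(fetcher) {
  const store = new Map();
  return {
    get(key, onUpdate) {
      const cached = store.get(key);        // undefined only on cold start
      fetcher(key).then((fresh) => {        // refresh in the background
        store.set(key, fresh);
        if (onUpdate && fresh !== cached) onUpdate(fresh);
      });
      return cached;
    },
  };
}

// Usage with a fake fetcher standing in for a server request:
const cache = makeCache(async (key) => `data for ${key}`);
console.log(cache.get('profile'));   // undefined: cold start, spinner once
setTimeout(() => {
  console.log(cache.get('profile')); // 'data for profile': instant on revisit
}, 0);
```

The `onUpdate` callback is where a view would re-render once fresher data lands, so the user sees content immediately and the update quietly afterwards.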
https://martin.kleppmann.com/papers/pushpin-papoc20.pdf
You can also try PushPin for yourself: https://github.com/automerge/pushpin/
Likely-outdated binaries are available here: https://automerge.github.io/pushpin/
[1] https://en.wikipedia.org/wiki/Sabre_(computer_system)#Histor...
[2] https://en.wikipedia.org/wiki/ADP_(company)
[3] https://www.fidelityworkplace.com/
[4] https://www.cigna.com/assets/docs/business/medium-employers/...
[5] https://www.computershare.com/us/business/employee-equity-pl...
[1] https://unhosted.org/adventures/7/Adding-remote-storage-to-u...
[2] https://remotestorage.io/
And I outline the features on the new site for the app at https://cherrypc.com/home.html
That site is still under construction but there is a link to a demo of the app there. It doesn't run on a local CouchDB though, it uses the browser's IndexedDB.
You only change one line of code to use the IndexedDB, the cloud based CouchDB, or the locally installed CouchDB.
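For context, the "one line" in question is the PouchDB constructor argument: a bare name selects the browser's local store (IndexedDB), while a URL points at a CouchDB. A sketch of how that switch might look (the helper and URLs are my own placeholders, not part of PouchDB or this app):

```javascript
// The only thing that changes between deployments is the PouchDB target.
function dbTarget(mode) {
  switch (mode) {
    case 'browser': return 'docs';                        // IndexedDB adapter
    case 'local':   return 'http://localhost:5984/docs';  // desktop CouchDB
    case 'cloud':   return 'https://db.example.com/docs'; // hosted CouchDB
    default: throw new Error(`unknown mode: ${mode}`);
  }
}

// const db = new PouchDB(dbTarget('local'));  // the one line that changes
console.log(dbTarget('browser')); // 'docs'
```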
I did make a very simple demo of a "Rich Text Editor" app that runs on a CouchDB installed on your desktop PC though. After you've installed CouchDB and created an "Admin User" and password, this page configures a user and a DB on your CouchDB:
https://cherrypc.com/app/editor/setup.html
After you've created your user, you're redirected to the app and prompted to log in at this page:
https://cherrypc.com/app/editor/index.html
After you log in you can CRUD & print rich text documents.
It's a very simple app and all the code to make it is included in the source of those two html pages.
https://youtu.be/QBGfUs9mQYY?t=352
https://www.reddit.com/r/selfhosted/
- Nextcloud - a system with appliance-like apps that do these sorts of things
- Proxmox - a VM system that allows you to deploy VMs and containers, including appliance-like templates
https://en.wikipedia.org/wiki/Nextcloud
https://nextcloud.com/
https://en.wikipedia.org/wiki/Proxmox_Virtual_Environment
https://www.proxmox.com/
I get the tradeoffs. I'm not going back 10+ years.
(E.g., you can find example horror stories here on Hacker News: https://www.google.com/search?q=locked+out+of+gsuite+site%3A...)
What could go wrong...