I don't think 37signals is a very strong example that this could become a trend. They are in the goldilocks zone where they're big and stable enough to save a lot of money with self-hosting, but not big enough that they can negotiate meaningful discounts with Amazon.
In reality cloud is too convenient, and the tradeoffs for self-hosting just don't make sense for the majority of companies. The talent to run your own servers with modern HA and reliability expectations has largely been consolidated into the giant cloud providers and other large companies at this point. As much as I think a lot of cloud is wasteful and more diversity would be beneficial, I don't see any meaningful change to cloud dominance on the horizon.
For most businesses, this is a fiction. Typical users are just as accustomed to outages and feature failures as they were twenty years ago, which is to say that they swear for a couple of hours but almost never migrate to competitors just because a service goes down now and again. They anticipate that the other service will be down just as much, and the friction of moving to a new service means there's got to be a real gain, not just a speculative one. Even in retail, users typically come back to where the catalog and price are right, and where they've had good post-sales experience and loyalty points, just like when a shop around town is unexpectedly closed because of a water leak or whatever.
Further, it's unclear that cloud vs managed hosting vs experienced self-hosting vs fairly naive self-hosting has a particularly different net reliability or availability profile for most use cases. That was the sales pitch of the cloud and it became a cargo cult saying for a while, but it's very hard to demonstrate in a scientific way. It turns out cloud services fail plenty and of course the vast majority of services hosted in other ways do just fine.
There are exceptions to all this, but they're rare, not routine.
Agreed, but this matters little if you can't hire people to manage your on-prem infra. 25 years ago everyone had a sysadmin and a DBA, now they are outnumbered by "devops engineers" who just know AWS and K8s.
I think one key requirement is a strong centralized directive on allowed technologies. If every team is allowed to run whatever they want, you’re likely going to struggle to run it on-prem, if for nothing else it’s unlikely that a dev team has the knowledge or resources to operate and maintain Kafka, for example.
Running HA services on your own is not as difficult as it’s made out to be. Anything stateless can just be another endpoint behind a load balancer. Stateful things like RDBMS require more thought (not only for HA, just in general), but this is also a solved problem.
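To make the stateless case concrete, here's a minimal HAProxy sketch (hostnames, ports, and the health-check path are all made up for illustration): two app servers behind round-robin load balancing with active health checks, so a dead backend is pulled out of rotation automatically.

```
frontend web
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    # Mark a server down if the health endpoint stops answering.
    option httpchk GET /healthz
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```

Adding capacity is then just another `server` line; the load balancer itself can be made redundant with a second instance and a floating IP (keepalived/VRRP), which is the usual next step.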
The other thing this requires is a more careful and performance-oriented approach to development. “It works, ship it” will likely wind up consuming far more resources than you’d like. Taking the time to profile the code – which can be done in CI – and addressing hot spots will greatly help.
> The talent to run your own servers with modern HA and reliability expectations has largely been consolidated
I kind of learned how to do all of this from Homelab communities. Claude 3.5 does really well with eliminating the toil of configuration documentation. If anything people vastly underestimate their capability to do this stuff themselves.
That said, AWS RDS is a unique product that people will not migrate from. As long as Postgres developers and the ecosystem continue to work for Amazon for free, companies will keep overpaying for it.
Yes! Moving past “I want to run Plex” is possible and encouraged.
> toil of configuration documentation
IMO, if you don’t understand what you’re configuring and why, you’re not really any better off. And especially with the Linux community, if you can’t demonstrate that you’ve personally read docs and explain what you’ve tried, you’re gonna have a bad time.
I hope so. We were considering migrating from on-premise to AWS or GCP. Did a couple of simulations, went with renewing hardware for our on-premise setup. It was the right choice. Storage alone is almost 3 times more expensive on cloud, compared to a NetApp appliance with financing...
Still using GCP for a couple of use cases (e.g. BigQuery, Vertex) and some backup infra, but 95% runs on our hardware in rented racks.
Only in that the growth of cloud migrations might slow. The cloud is a great option for a lot of workloads and companies. Cloud native applications and services can be cost effective. Lift and shift is the most expensive way to go and can end up costing more than on-premise.
Any views or experiences evaluating OpenStack instead of one of the big ones, AWS/Azure/GCP? OpenStack has a bad rep due to added complexity and limited developer tools that may lead to ultimately higher TCO, but I wonder if this is similar to what Linux was like roughly pre-2005, before it became commercially robust and refined enough to replace many corporate-level server operating systems.
Linux was a corporate-level operating system as far back as the mid-90s. It was the late 90s when it started getting enterprise software; for example, Oracle 8 (https://en.wikipedia.org/wiki/Oracle_Database) was released for Linux in 1997.
Enterprise Linux was getting going for real in the late 1990s, but in my view it was more 2005-ish that it became "mainstream" in this sphere. Sun Microsystems, for example, started to support Linux in 2006, which was a Hail Mary to try to save itself as SunOS was being eaten away by Linux.
Redhat Inc became part of the Nasdaq-100 in 2005.
I make this comparison as the question is whether OpenStack still has the potential to become a full go-to alternative in the way that clients consider closed cloud systems from AWS/GCP/Azure as substantial equivalents.
I would like to be able to use commodity hosted compute and storage and then build all the fancy stuff on top using open-source components. The generic providers like Linode are commodity at this point so this seems like the best of both worlds.
I have no confidence in my ability to configure Postgres on K8s with backups, however.
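(For what it's worth, this is the gap operators like CloudNativePG aim to close. A rough sketch of its Cluster resource with WAL archiving to object storage — the bucket path, secret name, and sizes here are placeholders, not a tested config — looks something like this:)

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-main
spec:
  instances: 3            # one primary, two streaming replicas
  storage:
    size: 20Gi
  backup:
    retentionPolicy: "30d"
    barmanObjectStore:
      destinationPath: s3://my-backups/pg-main   # placeholder bucket
      s3Credentials:
        accessKeyId:
          name: backup-creds                     # placeholder Secret
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-creds
          key: SECRET_ACCESS_KEY
```

The operator handles failover and point-in-time recovery from the archived WAL, which is exactly the part that's scary to hand-roll.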
I think so, but it's also funny to see some companies (my current included) struggling to get there to replace their on prem private Cloud, driven in large part by the VMware Broadcom situation.