Just deploy everything on EKS and use only foundational services such as S3 and PostgreSQL Aurora. There you go: Dumb Pipes.
Edit: When I say dumb, I mean: dumb enough that another public cloud vendor, such as GCP or Azure, can be swapped in.
Yeah, I would be really careful about relying on something like Aurora and planning to migrate to Cloud SQL or another vendor's managed PostgreSQL. Apps couple very tightly to the implementation quirks of their database, and even the tiniest difference between implementations is going to cause problems when you migrate. (For example, we used to test our Postgres app with SQLite. We found out, through an exciting production outage, that each query engine treats "WHERE foo = 't'" and "WHERE foo = true" differently. Postgres does what you'd expect, SQLite does not. That was the last time I'll ever use a different database for production and testing.)
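To make that concrete, here's roughly the shape of the repro (a contrived sketch using better-sqlite3, not our actual app or schema):
---
// Contrived illustration, assuming Node and the better-sqlite3 package.
const Database = require('better-sqlite3');
const db = new Database(':memory:');

db.exec("CREATE TABLE t (foo BOOLEAN)");    // SQLite stores this as an integer
db.exec("INSERT INTO t (foo) VALUES (1)");

// Postgres coerces 't' to boolean true and matches the row;
// SQLite compares the integer 1 against the string 't' and matches nothing.
console.log(db.prepare("SELECT count(*) AS n FROM t WHERE foo = 't'").get());  // { n: 0 }
console.log(db.prepare("SELECT count(*) AS n FROM t WHERE foo = true").get()); // { n: 1 }
---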
If you are going to be cloud provider independent, then you have to bring your own foundational services that run independently of the cloud provider. MinIO instead of S3, CockroachDB instead of managed Postgres, etc. (Honestly, I would feel safe switching between plain Postgres on RDS and some other provider's equivalent. Don't use any extensions, though. S3 I would probably try and get away with as well, since everything provides an S3-compatible API these days. But definitely not Aurora.)
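To be concrete about the S3-compatible part: with the current SDK it's mostly a matter of pointing the client at a different endpoint. A rough sketch against a local MinIO, with placeholder credentials and bucket:
---
// Placeholder endpoint, credentials and bucket; not a real deployment.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  region: "us-east-1",                 // largely ignored by MinIO, but required by the SDK
  endpoint: "http://localhost:9000",   // MinIO (or any S3-compatible store)
  forcePathStyle: true,                // path-style buckets instead of subdomains
  credentials: { accessKeyId: "minioadmin", secretAccessKey: "minioadmin" },
});

await s3.send(new PutObjectCommand({ Bucket: "backups", Key: "hello.txt", Body: "hi" }));
---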
You also need to think about your other cloud provider integrations. Do you programmatically change DNS? You'll need something that can handle any provider. IAM? You'll need something that handles every provider, too. Even at the Kubernetes level, things work differently. Compare Google's L7 load balancer to Amazon's ALB, for example. You might create them both with "kubectl create service loadbalancer ...", but the semantics are going to be different. Persistent volume types also vary between cloud providers; your "default" StorageClass is going to have different IOPS characteristics, for example, and that can easily burn you.
I am willing to bet that anyone using the cloud for something serious will have some hiccups migrating away, even if they're using Kubernetes. It's all the stuff that's not in Kubernetes that will get you!
Running your own Kubernetes cluster is complex and deep. Handing a YAML file to EKS and suddenly having infrastructure is not complex at all. It's actually pretty sweet.
AWS is much better at the dumb, commodity services like S3, EC2 and RDS (that are actually pretty smart) than they are at something like Redshift, SageMaker or similar.
They're "good enough" for most users that might not care about super high availability, devex etc. As long as its cheap and solves a business problem, it will win.
Quality declines steeply once you're off the beaten path.
The "all the things" approach that AWS has taken has led to a lot of great ideas being poorly executed, and the result is a mess of poor engineering.
E.g., the new version (v3) of the JavaScript SDK is such a clusterf*ck I almost can't believe they had the stones to ship it.
SDK design problems aren't unique to AWS. Companies that stand up dedicated language SDK teams eventually find those teams needing to justify their existence, so you end up with needless new versions. Some are better than others.
Like a sibling comment, I'd also like to know what was bad about the SDK. It seems... alright? That alone is good enough for me, considering its implementation is autogenerated from something else.
It also has tree-shakeable imports, which v2 doesn't have, and a very extensible-looking middleware architecture which I've yet to need, but it's good to know it's there if I do. The auto-generated docs leave something to be desired, but are passable, with a few extra clicks here and there.
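If I ever do need it, my understanding is the hook looks roughly like this (a sketch, not something I've shipped; the step, name and header are made up):
---
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Wrap the request pipeline and tag every outgoing request with a custom header.
client.middlewareStack.add(
  (next) => async (args) => {
    args.request.headers["x-request-source"] = "billing-service"; // made-up header
    return next(args);
  },
  { step: "build", name: "addSourceHeaderMiddleware" }
);

const { Buckets } = await client.send(new ListBucketsCommand({}));
---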
"Superior" by which measure? AWS is making money hand over fist. Companies use AWS because they think it's better than the alternatives.
Wow... I had not touched the AWS SDK for JavaScript in a long while. This is actually very surprising. It looks significantly different for what seems like a marginal improvement over v2.
- Auto-generated TypeScript documentation that's unintelligible (Lord Jesus help me).
- Weird naming of imported functions (e.g., appending "Command" to the end of imports).
- The need to create instances of requests (commands) and then "send" them to the API, versus just passing an object of options. What irks me is that it breaks the affordances of good API design.
For example, using S3 used to look like...
---
import AWS from 'aws-sdk';
const s3 = new AWS.S3({ ... });
s3.putObject({ /* ... */ }).promise().then((response) => {
  // Handle URL here.
});
---
Now, it looks like...
---
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const client = new S3Client(config);
const command = new PutObjectCommand(input);
const response = await client.send(command);
---
This example is taken straight from the docs (https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clien...). config and input are not described anywhere near this example, meaning you have to go hunt for them.
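For what it's worth, once you do go hunting, config and input for this particular call boil down to something like this (bucket, key and body made up):
---
const config = { region: "us-east-1" };
const input = { Bucket: "my-bucket", Key: "hello.txt", Body: "file contents" };
---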
The gold standard I reference for API design is Stripe. They have just as complex an API as anyone, and yet, they don't go out of their way to overcomplicate things. For example, this is stupid simple and obvious:
---
const stripe = require('stripe')('sk_test_4eC39HqLyjWDarjtT1zdp7dc');
// `source` is obtained with Stripe.js; see https://stripe.com/docs/payments/accept-a-payment-charges#we...
const charge = await stripe.charges.create({
  amount: 2000,
  currency: 'usd',
  source: 'tok_amex',
  description: 'My First Test Charge (created for API docs)',
});
---
Not only is the API descriptive of what it's doing by design, they had the wherewithal to explain where the missing piece can be found. Contrast that with Amazon where they give you a half-assed example and you have to go on a goose chase to find the params/options for the request. Why?!
The above changes look like a wank or job protection, not any form of well-considered improvement. Couple that with turning the docs into a maze (as opposed to just saying "here's the function, here are the required/optional params, this is what a response looks like") and you realize that the person responsible for this just greenlit it without any question as to "why are we doing this?"
Yes, I'm being an asshole about it. But this is now going to inform API design for other people as it's representative of "what the big guys do." Rinse and repeat that thinking for a decade or two and everything turns into a f*cking Rube Goldberg machine for absolutely zero reason.
The value I see in AWS isn't that anything is done particularly well. It's that there's enough of it that I can get almost all of what I need in one place.
The value of curation becomes greater when the supply is there. What comes after curation, I wonder?
I recently experienced this with Setapp. It's a $12/mo subscription that offers me a couple hundred apps and utilities to choose from. When I needed more insight into Wi-Fi signal issues, I just went with their rec. Same for a pomodoro timer, or a clipboard enhancer. Pull up their "App Store" on my Mac, type Wi-Fi into the search, and use what they offer. So nice.
AWS isn't a single "thing", or perhaps, it's only a bundled brand. There are many teams, sub-companies, divisions, groups all working on their own parts. Most have to follow a similar set of rules and some attempts are made to make it cohesive, but in the end it's not a single-minded entity with a single set of parameters it follows for every service offering.
You can run your stuff on AWS without ever touching compute, storage and networking yourself, and you can also run your stuff on AWS by only touching compute, storage and networking and nothing else. You can misjudge your risk and do everything as a root user in a single account, or you can create an account vending machine in a well-architected organisation with policies in place to prevent the huge number of footguns lying around everywhere from going off.
There are so many ways you could be using it that trying to put a single label on it seems pointless to me. Just like the only constant you can count on is 'change', the only 'proper' usage or terminology in terms of AWS is what you know, what you need, and what you ended up doing. This includes shooting yourself in the foot with CloudFormation because you accidentally thought CDKs are your friend and JavaScript was the best tasting language for your goal, or thinking that clicking around in the console would be a good way to get your department of developers up and running.
Ok, but by that logic it would be impossible to make any kind of analysis about AWS (or other cloud providers) in the sense that the author attempted to make.
In the end, I believe the question is about lock-in and control. With a true "dumb pipe" offering, you could just take your software elsewhere and run essentially the same code without changes. Meanwhile, if you use various proprietary APIs, this is not the case. I'd argue that even if the APIs have open-source implementations that you could theoretically use elsewhere, it's not clear how viable that would be in practice.
I don't think AWS becomes a "dumb pipe" just because you could theoretically use it like one - at least not if doing so would require you to go "against the grain" of the platform the whole time and would force you to use only a fraction of what you paid for.
You definitely can, but I'm not sure what the point would be. Say you are a commercial entity and you don't want to run everything vertically integrated. You go looking for external entities that can do some of the work for you, preferably specialised in such a way that they efficiently deliver whatever you need. At that point, this whole 'dumb pipe' concept is just a column in a crappy spreadsheet somewhere that might make a minor impact on the choice being made. Spending 10 million extra each month because an engineer liked the sound of a dumb pipe, with a vague promise of easier migrations, isn't realistic.
If you truly don't want to be dependent, you need to identify to what degree you want that, and then pay a lot more to get there, usually ending up multi-cloud. Even three 'dumb pipes' aren't similar enough to create a 'write once' IaC and application definition. The closest you can get is avoiding FaaS-type offerings and sticking to OCI containers. You'll still have to work with the IAM and network primitives each vendor requires, and even if you don't run active-active, you'll still have to write every deployment for every provider you use to ensure your systems are always ready to deliver.
It's not impossible, but impractical and expensive. For most companies, the ROI just isn't there. Offloading more day-to-day and non-company-specific work is the way, and dumb pipes can't do that.
You can avoid all lock-in by ignoring all sorts of proprietary features of any vendor, and leave ENDLESS benefits and optimizations on the table for the hypothetical likelihood that you might one day have to "free yourself".
Conversely, if you embrace those things wholeheartedly, and at some point find yourself wanting to break the relationship and do a full egress migration, even if your costs end up monumental, I would bet anything that over the time you were "locked in" you will have gained more from that cloud than whatever "one-time migration costs" you have to pay.
If your core business proposition is literally something like storage (e.g. Dropbox) where with time you will want to vertically integrate your whole technology stack rather than relying on a 3rd party, yeah avoid lock-in.
If you have a frivolous C-suite that will want to change cloud providers every 24 months depending on what discount they can negotiate on the golf course, yeah avoid lock-in.
Everyone else? Throw that word out from your vocabulary, and embrace the idiomatic capabilities of your cloud vendor.
> When there's no product differentiation, distribution wins. In the telco case, products built on top of broadband were 10x improvements over products built into broadband. YouTube, Netflix and other internet content providers could do things that cable and telephone simply couldn't. Now, competitors must differentiate on expertise, community, and developer experience.
I don’t disagree, but I think the important thing to understand is the flip side of this argument: products built on top of broadband were 10X better… and that created demand for the dumb pipe. If you wanted these new products, you had to order broadband. And then cable and telcos had local monopolies/oligopolies, so there was no need to differentiate themselves. Others created the demand and they locked customers into renewing contracts. Why be anything other than a dumb pipe in such a scenario?
I think this over-estimates the intelligence of cloud businesses, and under-estimates the motives of paying customers.
First there's the assumption that Amazon, Microsoft, Google, "get software". This doesn't really mean anything. So you're a tech company and you know how to churn out software; so what? That doesn't mean your particular tech is better than anybody else's. Amazon alone has 70,000 engineers; they didn't snag all the good ones. If you look at their code (or try to package it for a Linux distribution), it's a hot mess. Their tech is not why they're successful. They know that software is just a tool you use to sell.
These companies' major wins were for all kinds of reasons other than their technical acumen: spinning off a new business from an already-profitable one, acquiring and integrating other businesses, or keeping a moat (sorry - "platform") around a particular market or customer base. It doesn't matter if they "get software". Just look at Google Cloud; it'll probably be sunset in 2 years. Who cares if they "get software" if they suck at acquiring customers? Apple is launching their own public cloud soon, as are a couple of other companies, small and large. Will they be successful? It won't be because of their technology (they're all using the same code anyway). If you want to steal customers from Amazon, you need to sell it.
Paying customers don't care if you can see the future or are "internet native". They care if you can lower their costs and get them to market quicker, if you provide stellar support, if your product is more popular (more people know how to build with it/maintain it), if you make their life easier, and if your service appears more reliable.
And AWS knows all this. They're a savvy salesman, a reliable contractor, and an industry standard. They know how to keep and grow their business, and it isn't by reducing what they sell or focusing on a narrow market. If you're the incumbent, you sell more integrated services and diversify your business, and keep pushing until a market implodes. Then you tourniquet that part of the business and pivot. But what they'll never do is strategically refocus on some smaller business sector if it means making less money.
> On the other hand, that means that services built over-the-top pay a higher cloud tax. Not sure how this one plays out.
It’s unlikely services will continue to build on top and pay a high cloud tax for two reasons:
1. It's not safe to bet on any entity remaining at the top forever; that applies to corporations at the top of their game, too.
2. Tech tends to be an upward moving pendulum as it converges on an optimal solution. Back in the day, companies owned most of their stack from top down (initial position of pendulum) and now companies own only the top part of their stack by building on top of Azure/AWS/GCP (pendulum swing). There’ll be a pendulum swing back but it’ll be an improvement over the status quo for at least some use cases.
It’s likely new technologies and regulations will make building/owning deep vertical stacks a viable option and it’ll be worth it for some use cases. For example, visual/audio stacks should be deep and owned by an entity, I suspect there’s a large margin of improvement in this area that can be attained with fine-tuned vertical stacks (e.g. https://tonari.no/ is doing some low level stuff to build a better audio/visual experience in meetings). I think companies like https://oxide.computer/ will inadvertently, collectively help make building deep vertical stacks an option on the table.
A difference between telcos and IaaS is that IT is 'baked into' AWS. AWS's IT team (people, software, automation) silently becomes an extension of your team. That makes AWS more than a dumb pipe, and makes it difficult to leave AWS (generally you have to be very purposeful to avoid this lock-in).
Interestingly, considering the OP's comparison, many enterprises are also tethered to AWS via telco dumb pipes (e.g. App user -> WAN -> Private DC -> Colo facility -> Direct Connect (MPLS) -> app in VPC).