I have the opposite philosophy, for what it’s worth: if we’re going to pay for AWS, I want to use it correctly, and maximally. So, for instance, if I can offload a given piece of work to Amazon and it’s appropriate to do so, that’s preferable. Step Functions, Lambda, DynamoDB, and so on have, over time, come to supplant their alternatives, and it’s more efficient and cost-effective overall.
That said, I strongly believe developers don’t put enough thought into how to make optimal use of their vendor.
AWS can be used in a different, cost-effective way: as a middle ground that serves the existing business while building towards a cloud-agnostic future.
The good AWS services (S3, EC2, ACM, SSM, R53, RDS, the instance metadata service, IAM, and the E/A/NLBs) are actually good, even if tracking their billing changes is a concern.
If you architect with these primitives, you are not beholden to any cloud provider, and you can cut traffic over to a non-AWS provider as soon as the work is done.
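To make the cutover concrete, here’s a minimal sketch, assuming boto3 and Route53: it repoints an A record at a non-AWS target, and the TTL governs how fast traffic moves. The zone ID, record name, and IP are hypothetical placeholders.

    import boto3

    # Hypothetical placeholders; substitute your own zone, record, and target.
    HOSTED_ZONE_ID = "Z0000000000000000000"
    RECORD_NAME = "app.example.com."
    NEW_PROVIDER_IP = "203.0.113.10"  # documentation-range IP for the non-AWS target

    def cut_over_dns():
        """Repoint the A record at the non-AWS provider; the TTL bounds propagation."""
        route53 = boto3.client("route53")
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={
                "Comment": "cut traffic over to the new provider",
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": RECORD_NAME,
                        "Type": "A",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": NEW_PROVIDER_IP}],
                    },
                }],
            },
        )

    if __name__ == "__main__":
        cut_over_dns()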
Let me explain why we’re not talking about an 80/20 split.
There’s no reason to treat something like a Route53 record or a security group rule the same way you treat the creation of IAM policies/roles and their associated attachments.
If you create a common interface for your engineers/auditors, built on real primitives like the idea of a firewall rule, you make it easy for everyone to avoid learning the idiosyncrasies of each deployment target, to write their own merge requests, and to review the intended state of a given target.
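As a rough sketch of what that interface could look like (the names and shapes here are illustrative, not a prescribed design), here’s a provider-neutral firewall rule rendered into both an AWS security group ingress rule and an nftables line:

    from dataclasses import dataclass

    @dataclass
    class FirewallRule:
        """The provider-neutral primitive engineers and auditors reason about."""
        description: str
        protocol: str   # "tcp" or "udp"
        port: int
        source_cidr: str

    def to_aws_ingress(rule: FirewallRule) -> dict:
        """Render as the IpPermissions shape an AWS security group rule expects."""
        return {
            "IpProtocol": rule.protocol,
            "FromPort": rule.port,
            "ToPort": rule.port,
            "IpRanges": [{"CidrIp": rule.source_cidr, "Description": rule.description}],
        }

    def to_nftables(rule: FirewallRule) -> str:
        """Render the same rule as an nftables line for a non-AWS target."""
        return (f"ip saddr {rule.source_cidr} {rule.protocol} "
                f"dport {rule.port} accept comment \"{rule.description}\"")

    # One intended state, reviewable in a merge request, rendered per target:
    allow_https = FirewallRule("public HTTPS", "tcp", 443, "0.0.0.0/0")
    print(to_aws_ingress(allow_https))
    print(to_nftables(allow_https))

The merge request describes a firewall rule; the rendering layer, not the reviewer, worries about each target’s dialect.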
If you need to do something provider-specific, make a provider-specific module.
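For something like IAM, which has no portable equivalent, the escape hatch stays contained in its own module rather than being forced through the generic interface. A sketch, again with hypothetical names:

    # aws_iam_role.py -- a provider-specific module, deliberately AWS-only.
    import json

    import boto3

    def create_service_role(role_name: str, service: str) -> str:
        """Create an IAM role assumable by an AWS service; returns the role ARN."""
        trust_policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": service},
                "Action": "sts:AssumeRole",
            }],
        }
        iam = boto3.client("iam")
        resp = iam.create_role(
            RoleName=role_name,
            AssumeRolePolicyDocument=json.dumps(trust_policy),
        )
        return resp["Role"]["Arn"]

    # e.g. create_service_role("app-task-role", "ecs-tasks.amazonaws.com")

Everything provider-specific is then auditable in one place instead of leaking into the shared interface.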