"Billing alerts" are a joke, give us hard spend limits. Then offer a way to set those limits during onboarding.
Building a business on blank cheques and accidental spends is shady. It's also a large barrier to adoption. The more times devs see reports like, "I tried [random 20-minute tutorial] and woke up to a bill for my life's savings and luckily support waived the fee this one time but next time they're coming for my house", the less they'll want to explore your offerings.
It's not just AWS. I think there are only two types of cloud providers: The ones like AWS and DigitalOcean that shift the risk to the customer and the ones that offer shady "unlimited" and "unmetered" plans.
Neither is what I want. I wish there was a provider with clear and documented limits to allow proper capacity planning while at the same time shifting all the availability risk to the customer but taking on the financial risk. I'd be willing to pay a higher fixed price for that, as long as it is not excessive.
> It's not just AWS. I think there are only two types of cloud providers: The ones like AWS and DigitalOcean that shift the risk to the customer and the ones that offer shady "unlimited" and "unmetered" plans.
Actually there is a third category, those who care. I will grant you it is a rare category but it is there.
One example: Exoscale[1]
A Swiss cloud provider, they offer:
(a) hard spend limits via account pre-pay balances (or you can also have post-pay if you want the "usual" cloud "surprises included" payment model).
(b) good customer service that doesn't hide behind "community forums"
Sure, they don't offer the full bells-and-whistles range of services of the big names, but what they do do, they do well.
No, I am not an Exoscale shill, and no, I don't work for Exoscale. I just know some of their happy customers. :)
[1] https://www.exoscale.com/
It seems that automatic billing is something that cloud providers invented. Home Internet providers or mobile providers, for example, usually use prepaid plans, where they simply stop the service once you run out of money (though you can link your card account if you trust them). So you cannot get charged an arbitrary amount for home Internet, nor for mobile unless you travel.
Hetzner does this too for CPU use, and last time I checked, your traffic would drop to a slower bandwidth if you passed some limit. I think they still charge you, though.
Hard spend limits are an anti-feature for enterprise customers, who are the core customer of AWS. Almost no level of accidental spend is worth creating downtime or data loss in a critical application.
Even having the option of a hard spend limit would be hazardous, because accounting teams might push the use of such tools, and thereby risk data loss incidents when problems happen.
Hard spend limits might make sense for indie / SME focused cloud vendors though.
> Hard spend limits are an anti-feature for enterprise customers
Yada yada yada, that's the same old excuse the cloud providers trot out.
Now, forgive me for my clearly Nobel Prize winning levels of intellect when I point out the following...
Number one: You would not have to turn on the hard spend limit if such functionality were to be provided.
Number two: You could enable customers to set up hard limits IN CONJUNCTION WITH alerts and soft limits, i.e. hitting the hard limit would be the last resort. A bit like trains hitting the buffers at a station ... that is preferable to killing people at the end of the platform. The same with hard spend limits, hitting the limit is better than waking up in the morning to a $1m cloud bill.
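For what it's worth, the alerts-and-soft-limits half of this already exists as AWS Budgets; a minimal boto3 sketch (the account ID, e-mail address, and $1,000 threshold below are placeholders):

```python
import boto3

# Sketch: a monthly cost budget with an 80% "soft" alert.
# Account ID, e-mail, and the limit are placeholder values.
budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-cap",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,  # percent of the limit
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "ops@example.com"}],
    }],
)
```

The missing piece is the buffers at the end of the platform: there is no first-class, account-wide hard stop. Budget actions can stop tagged EC2/RDS instances or attach a restrictive IAM policy, but that's plumbing you have to assemble yourself.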
> Hard spend limits are an anti-feature for enterprise customers, who are the core customer of AWS. Almost no level of accidental spend is worth creating downtime or data loss in a critical application.
Great, so they don’t have to use the feature?
That excuse was a great excuse when AWS was an MVP for someone. 20+ years later… there is no excuse.
> Almost no level of accidental spend is worth creating downtime or data loss in a critical application.
That feels like a bit of a red herring — if that were their ethos, then you'd _have_ to choose a burstable/autoscaling config on every service. If I can configure a service to fall over rather than scale at a hard limit, that points to them understanding their different use cases (prod vs dev) and customer types (start-up vs enterprise).
Additionally, anytime I've worked for an enterprise customer, they've had a master service agreement set up with AWS professional services rather than entering credit card info, so AWS could use that as a simple way to offer choices.
> Even having the option of a hard spend limit would be hazardous, because accounting teams might push the use of such tools, and thereby risk data loss incidents when problems happen.
"Hazardous" feels like the wrong word here - if your customer decides to enact a spend limit it should not be up to you to decide whether that's good for them or not.
So offer a “Starter” account with spend limits, then. I don't understand how an individual is supposed to learn this stack and actually sleep at night without waking up panicking that something has been left running.
In the olden days, if we spotted a customer ringing up a colossal bill, we would tell them. These huge Amazon bills build fast, but still over multiple days. They can trivially use rolling-projection windows to know when an account is having a massive spike.
They could use this foresight to call the customer, ensure they're informed, give them the choice about how to continue. This isn't atomic rocket surgery.
"Oh but profit" isn't an argument. They are thousands of dollars up before a problem occurs. The only money they lose is money gained through customer accident. Much of it forgiven to customers who cannot afford it. It's not happily spent. They can do better business.
Make "unlimited spend" an opt-in. That way users who have explicitly chosen this and agreed to the terms can't then complain to support to try and get the bill waived.
> Almost no level of accidental spend is worth creating downtime or data loss in a critical application
But not all applications are critical, and the company deploying those applications should be able to differentiate between what's critical and what's not. If they're unable to, that's their fault. If there's no option to set hard limits, that's AWS' fault.
(Office Space)
OTOH, the default quota of 55 PB (yes, go check it) for your daily data-extraction limit from GBQ is funny, until you make a costly mistake or some forked process turns zombie.
It's a predatory practice that I can't set up MONEY limits for cloud services.
> Even having the option of a hard spend limit would be hazardous, because accounting teams might push the use of such tools, and thereby risk data loss incidents when problems happen.
This is an interesting point and something I can totally imagine happening! I guess if you have fixed spending limits in a large enough organisation, you lose some of the benefit of cloud infrastructure. Convincing a (metaphorically) remote Finance department to increase your fixed spending limit is probably a tougher task than ordering a load of new hardware!
> Even having the option of a hard spend limit would be hazardous, because accounting teams might push the use of such tools
Tell me you're Shadow IT without telling me you're Shadow IT.
I know legitimizing shadow IT is still the value proposition of AWS to a lot of organizations. But it sucks if that's the reason the rest of us can't get an optional feature.
It's been mentioned several times in HN comments that the AWS billing code is a giant pile of spaghetti and there is generally a lot of fear around making big changes to it.
That's been one of the more interesting inside baseball facts I've learned here.
Completely agree. Having worked previously for ... (humans?) ... I can authoritatively theorize it would be fixed in a jiffy if it weren't making them bucketloads of money.
To make my case, just ponder the opposite: "What would an honest version of AWS do?" They would address the concerns publicly, document their progress towards fixing the issue, and even try to determine who was overcharged due to their faulty code and offer them some compensation.
"We're too big to fix our own code" is, sadly, taken from the MS playbook (IIRC, something like that was made public after the breach of MS manager mailboxes after the whole Azure breach fiasco that was discovered by, IIRC, the DOJ that paid to have access to logs).
I've also shot myself in the foot with various APIs, e.g. I racked up a $3k bill in a month with Google APIs for my side project, just because I wasn't checking the usage; I didn't think I was using that much. This is more my fault, I guess, but luckily support waived that fee for me after some back and forth. Also, I'm not from the US, so this bill is larger for me compared to US folks. But honestly, Google APIs are also pretty expensive. For hosting my side projects I use a single dedicated DO droplet for $320 a month, where I have 40+ docker containers.
E.g. for me: I never dared to get my feet wet with AWS, despite interest. Better safe with a cheap, flat-rate VPS than sorry.
I have a personal account that I'm meticulously careful about (but still terrified of).
I also have an account with L̵i̵n̵u̵x̵A̵c̵a̵d̵e̵m̵y̵ A̵C̵l̵o̵u̵d̵G̵u̵r̵u̵ PluralSight: and while the courses are very variable (and mostly exam cramming focused) it has their Cloud Playground as a super nice feature.
I get four hours to play around with setting things up and then it will all get automatically torn down for me. There's no cloud bill, just my subscription (I think that's about $400pa at the moment - can't check right now as annoyingly their stuff is blocked by our corporate network!) It has a few limitations, but none that have been problems for me with exploring stuff so far.
Asking Amazon to do something makes little sense. Create laws that force Amazon, and all the rest, to respect their users' money. By default, corporations will do what makes them money, not what is ethical or good for the economy.
Jeff Bezos killed the endorsement because he wanted Trump to win. Trump will return the favor.
I can see the value in a _choice_ between billing limits and billing alerts, for those customers who don’t want their resources ever to be forcibly shut down, but you’re right in saying that choice should be front-and-centre during account creation.
The main question to me is: how the hell could two OpenSearch domains cost $1k+ a month in the first place!?
AWS prices are ridiculous. I pay OVH $18/mo for a 4-core, 32 GB RAM, 1 TB SSD dedicated server. The cheapest on AWS would be r6g.xlarge, which costs $145/mo. Almost 10x.
Yes, AWS hardware is usually better, but they give me 4 "vCPUs". OVH gives me 4 "real" CPU cores. There's a LOT of difference. Even if my processor is worse than AWS', I still prefer 4 real CPUs to virtual ones, which are overbooked by AWS and rarely give me 100% of their power.
OVH gives me 300 Mbit, while r6g.xlarge gives "up to" 10 Gbit. But still, 10x? 300 Mbit gives me ~37 MB/s. I use a CDN for large stuff: HTML, images, JS, anyway...
There are certainly cases where AWS is the go-to option, but I think it's a small minority where it actually makes sense.
You have reached your Configured Maximum Monthly Spend Limit.
As per your settings, we have removed all objects from S3, all RDS databases, all Route 53 domains, all EBS volumes, all Elastic IPs, all EC2 instances, and all snapshots.
Please update your spend limit before you recreate the above.
Yours, AWS
A compromise solution to this could be to block creation of new resources if their monthly cost would exceed the monthly limit, unless the customer increases the limit.
It wouldn’t solve the problem for usage-based billing, but it would have solved the problem here.
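To make the compromise concrete, here's a sketch of the pre-flight guard a provisioning tool could run; the price table is hypothetical and stands in for a real Pricing API lookup:

```python
# Hypothetical monthly prices; a real version would query the AWS
# Pricing API or a cached price list instead.
PRICE_PER_MONTH = {"t3.micro": 7.60, "r6g.xlarge": 147.00}

def can_create(instance_type: str, current_monthly: float, cap: float) -> bool:
    """Refuse a new resource if its steady-state monthly cost would bust the cap."""
    projected = current_monthly + PRICE_PER_MONTH[instance_type]
    if projected > cap:
        print(f"Blocked: projected ${projected:.2f}/mo exceeds cap of ${cap:.2f}")
        return False
    return True

assert can_create("t3.micro", current_monthly=40.0, cap=100.0)
assert not can_create("r6g.xlarge", current_monthly=40.0, cap=100.0)
```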
For an AWS account dedicated purely to experimentation that would be fine, though.
Having one AWS account where you actually run stuff, and one that follows the rule of "if it can't be paved and recreated from github, don't put it there" is exactly how a lot of people do it anyway.
There's always a dunning period and multiple alerts.
I don't think it's particularly obvious or necessary. AWS makes its money on big enterprise customers who probably don't want or need this feature. Hobbyists learning AWS in their spare time are a rounding error on AWS revenue.
I would bet that the reason they don't implement it is not that they're being "shady" but that they don't care about hobbyists and personal projects, and hard spending limits would be a huge, complicated feature to implement. And even if they did put in the huge effort to do it, individuals would still manage to not use it, and the steady trickle of viral "I accidentally spent X thousand bucks on AWS" stories would continue as usual.
Not really. Do you think that this is trivial at AWS scale? What do you do when people hit their hard spend limits, start shutting down their EC2 instances and deleting their data? I can see the argument that just because its "hard" doesn't mean they shouldn't do it, but it's disingenuous to say they're shady because they don't.
> the less they'll want to explore your offerings.
Honestly? All the better. There are obviously use cases where AWS is the right tool for the job, but it's extremely rare. It's coasting on hype and somehow attaining "no one was ever fired for buying IBM" status.
It's just not the kind of thing you're going to see in a blog post.
GCP requires this when you set up a new project. GCP deserves as much credit as AWS does scorn.
AWS's growth doesn't come from courting small random devs working on side projects.
Could you set a spending limit on the credit card associated with the cloud service? Or would it still accrue costs after the limit has run out, which they would collect in other ways?
No, setting a limit doesn't work. They will still try to charge you. A friend had a charge for $0.35 running for a year after his credit card expired. They closed his account later but would definitely come after you if the amount was significant.
I know it's minor in comparison, but I will never use AWS again after running up a $100 bill trying to get an app deployed to ECS. There was an error (on my side) preventing the service from starting up, but CloudWatch only had logs about 20% of the time, so I had to redeploy five times just to get some logs, make changes, redeploy five more times, etc. They charged me for every single failed deploy.
After about two days of struggling and a $100 bill, I said fuck it, deleted my account, and deployed to DigitalOcean's app platform instead, where it also failed to deploy (the error was with my app), but I had logs, every time. I fixed it and had it running in under ten minutes; the total bill was a few cents.
I swore that day that I would never again use AWS for anything when given a choice, and would never recommend it.
I've only used Azure, and it looks like ECS is equivalent to Azure Container Apps. I found their consumption model to be very cheap for doing dev/test. Not sure what it is like for larger workloads.
Charging per deployment sounds crazy, though.
I think technically I was just being charged for the container host machine, but while each individual deploy only lasted a minute or so, I was being charged the minimum each time. And each new deploy started a new host machine. Something like that anyway, it was a few years ago, so I don't remember the specifics.
So I can understand why, but it doesn't change that if their logging hadn't been so flaky, I should have been able to fix the issue in minutes with minimal cost, like I did on Digital Ocean. Besides, the $100 they charged me doesn't include the much more expensive two days I wasted on it.
Yes, I believe that is the equivalent of ACA. I use ECS in prod and it's incredibly cheap and efficient. Like all things, it requires a little legwork to make sure it's the best fit. They just charge for the underlying machines, not the deployment itself.
I gave up on AWS when I realised you can't deploy a container straight to EC2 like you can on GCP. For bigger things, yeah, the support's better; for anything small to mid-size, GCP all day. Primitives that actually make sense for how we use containers these days. And BigQuery.
For containers, you don't want EC2, you want ECS, possibly even Fargate depending on your use case. They're different compute primitives based on your needs.
There isn't a boxed product like BigQuery, but the pieces are all there: DynamoDB, Athena, QuickSight...
This seems like a glaring bug in the scripts run by that `npx` command. The author is correct, the scripts should 100%:
- Choose the lowest cost resource (it's a tutorial!)
- Clean up resources when the `delete` subcommand is run
I don't think it's fair to expect developers to do paranoid sweeps of their entire AWS account looking for rogue resources after running something like this.
If a startup had this behavior would you shrug and say "this happens, you just have to be paranoid"? Why is AWS held to a different standard by some?
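For what it's worth, the cleanup half is only a few lines if the tutorial provisions everything through CloudFormation (an assumption on my part; the stack name is a placeholder):

```python
import boto3

# Sketch: what a tutorial's `delete` subcommand should reliably do,
# assuming everything was provisioned as a single CloudFormation stack.
cfn = boto3.client("cloudformation")
stack = "tutorial-stack"  # placeholder name
cfn.delete_stack(StackName=stack)
cfn.get_waiter("stack_delete_complete").wait(StackName=stack)
# Caveat: non-empty S3 buckets block stack deletion and must be
# emptied first, which is exactly the kind of step tutorials skip.
print(f"{stack} and the resources it owned are gone")
```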
> do paranoid sweeps of their entire AWS account looking for rogue resources
That's the thing that annoys me the most about AWS. There's no easy way to find out all the resources I'm currently paying for (or if there's a way, I couldn't find it).
Without an easy-to-understand overview, it feels like I don't have full control of my own account.
Cost Explorer, in the management account if you've got Organizations set up.
The closest I found in AWS was something like the Tag Editor?
You can set up daily or hourly cost and usage reports on the account. I built a finops function based on them, feeding the data into a Postgres DB. Make sure to select incremental updates; if not, you'll end up paying for TBs of S3 storage.
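For anyone who wants to replicate that setup, the report definition itself is a single call; a sketch (bucket, prefix, and region are placeholders, and as far as I can tell `OVERWRITE_REPORT` is the "incremental updates" knob):

```python
import boto3

# Sketch: a daily Cost and Usage Report delivered to S3. The bucket
# needs a policy allowing billingreports.amazonaws.com to write to it.
cur = boto3.client("cur", region_name="us-east-1")  # the CUR API lives in us-east-1
cur.put_report_definition(
    ReportDefinition={
        "ReportName": "daily-cur",
        "TimeUnit": "DAILY",
        "Format": "textORcsv",
        "Compression": "GZIP",
        "AdditionalSchemaElements": ["RESOURCES"],
        "S3Bucket": "my-cur-bucket",   # placeholder
        "S3Prefix": "cur/",
        "S3Region": "us-east-1",
        "RefreshClosedReports": True,
        # Overwrite in place instead of piling up report versions --
        # this is what saves you the TBs of S3 storage mentioned above.
        "ReportVersioning": "OVERWRITE_REPORT",
    }
)
```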
What about the billing dashboard? You can break it down by service and say CPU or memory, or tags if you use them. That has always given me good enough insight into where my client's money is being spent. I'm not sure it's totally realtime, but certainly daily.
BTW I'm a supporter of spending caps, not saying this should be the only way.
Every so often, I'd get a random bill from AWS totaling a few cents. No idea where it comes from, and it's not worth the non-trivial effort to find out. Just another reason I avoid AWS unless necessary.
My first line of research when I have to use something new is: Can I get a fixed bill every month? If I use more than that, can I limit surprises? If not, I will find something else. We are also very careful about building ourselves into "free" Google services after the Maps surprise a few years ago. That cost us a lot of money in the end.
Is there even a simple way of listing all the existing resources in an AWS account? I’ve always had to check service by service, region by region. It’s tedious and error-prone.
Cost and usage reports will show you what is being paid for. Then there are resources that won't show up on those, so I have used AWS Config to pull down other resource lists; finally, you can cross-reference both reports to more or less find everything.
I thought the tag editor was where one could get a comprehensive inventory of account resources? (Unable to check as I don't currently have easy access to the AWS console)
It might provide that, but I've never tried it myself, so I could be wrong.
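The Tag Editor console is backed by the Resource Groups Tagging API, which you can sweep region by region; a sketch below. Caveat: as far as I know it only surfaces taggable resources (and may miss ones that were never tagged), so it's not a complete census:

```python
import boto3

# Sketch: enumerate resources across all regions via the Resource
# Groups Tagging API -- roughly what the Tag Editor console shows.
session = boto3.Session()
for region in session.get_available_regions("resourcegroupstaggingapi"):
    tagging = session.client("resourcegroupstaggingapi", region_name=region)
    try:
        for page in tagging.get_paginator("get_resources").paginate():
            for resource in page["ResourceTagMappingList"]:
                print(region, resource["ResourceARN"])
    except Exception:
        continue  # regions disabled for the account fail auth; skip them
```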
Amazon earns an easy $1,000; it is not a bug but a feature. Even if they do think it's a bug, it's going to rank pretty low compared to anything else that hits THEIR bottom line.
> I don't think it's fair to expect developers to do paranoid sweeps ...
Agree, it isn't fair. I think it's sensible, though. When creating anything on AWS, I always behave as if AWS were a hostile financial institution gone rogue.
I've been putting off digging into AWS for years now, and it's because of stories like these. There really should be a standardized training course that requires no credit card info and lets people experiment for free.
Instead they have some pencil pushers calculating that they can milk thousands here and there from "user mistakes" that can't be easily disputed, if at all. I'm sure I'm not the only person who's been deterred from their environment due to the rational fear of waking up to massive charges.
It is very unusual for AWS not to issue refunds in situations like this, so I don't think it's a function of them finding an edge to milk thousands from user mistakes. More likely they've found that issuing refunds is less onerous than it would be to provide accurate and cheap tutorials.
Perhaps that does not excuse the behaviour, but AWS reversed a $600 charge I incurred using AWS Textract where the charges were completely legitimate and I was working for a billion-dollar enterprise.
I once ran up a bill of $60 accidentally and didn't get a refund. I've had three friends with bills; one got a refund.
It might depend on who you know, whether you look like someone who is likely to spend more money in future, or how stupid your mistake was. I don't know.
I accidentally pushed an AWS key to a public repo, and by the next day, had like $50k in charges from crypto miners. AWS reversed the charges, with the only condition that we enable some basic security guardrails that I should have had in place to begin with.
> It is very unusual for AWS not to issue refunds in situations like this
...when asked to. But what percentage of mistakes like this end up just being "eaten" by the end-user, not realizing that they can ask for a refund? What percentage don't even get noticed?
> I've been putting off digging into AWS for years now
In my opinion people end up in these billing situations because they don't actually "dig in" to AWS. They make their pricing easily accessible, and while it's not always easy to understand, it is relatively easy to test as most costs scale nearly linearly.
> the rational fear of waking up to massive charges.
Stay away from the "wrapper" services: AWS Amplify, CloudFormation, or any of their Stack-type offerings. Use the core services directly yourself. All services have an API. Getting an API key tied to an IAM user is as simple as clicking a button.
Everything else is manageable with reasonable caching and ensuring that your cost model is matched to your revenue model so the services that auto scale cost a nearly fixed percentage of your revenue regardless of current demand. We take seasonal loads without even noticing most years.
Bandwidth is the only real nightmare on AWS, but they offer automatic long term discounts through the console, and slightly better contract discounts through a sales rep. Avoid EC2 for this reason and because internal bandwidth is more expensive from EC2 and favor direct use of Lambda + S3 + CloudFront.
After about 3 months it became pretty easy to predict what combination of services would be the most cost effective to use in the implementation of new user facing functionality.
Pretty ironic that you're actually listing more reasons why I would not use AWS at all: "stay away from", "ensure that you", "reasonable caching", and "bandwidth is the only real nightmare" are all huge red flags.
>Instead they have some pencil pushers calculating that they can milk thousands here and there from "user mistakes" that can't be easily disputed
User mistakes of this type must be a drop in the bucket for AWS and in my experience they seem more keen to avoid such issues that can cost more in damaged reputation.
AWS is not cheap, and in some cases it's incredibly expensive (egress fees), but tricking their customers into accidentally spending a couple of hundred extra is not part of their playbook.
Par for the course for AWS. I tried following their quickstart SageMaker guide to run Llama 2 a few months back. It certainly spins up quick, but the next day I realized it was running me $400/day.
I was able to get the charges reversed, but definitely learned not to trust their guides.
Status: Won't fix (working as intended)
Notes: Got the author promoted to SDE III for great impact and revenue boost
SageMaker is one of the biggest bummers in terms of product and is a clear case of enshittification.
When the product was starting (2017/2018), the whole setup was quite straightforward: notebook instances, inference endpoints, REST APIs for serving. Some EFS on top, and it was clear that the service centered around S3. And of course, a fixed price without any surprises.
The whole experience had a kind of DigitalOcean vibe: a Data Scientist with rudimentary knowledge of, and curiosity about, infrastructure could set up something affordable, predictable, and simple.
Today we have Wrangler, Feature Store, and RStudio; the console for the notebooks has an awful UX, and several services under the hood are moving data around (and billing for it).
I needed to send "raw" HTTP requests instead of using their bloated SDK, for reasons, and requests failed with a "content-type: application/json" header but succeeded with "content-type: application/x-amz-json-1.0". Get out of here with that nonsense.
I feel this way about pretty much every aspect of AWS I have touched in my career. Overly bloated, overly complex or weird home brew implementation for no clear gain.
Didn't you know, Amazon owns JSON? They acquired it this week, please update all your Content-Type headers within 12 months otherwise you will be in violation of their IP holdings.
If they use a non-standard version of JSON (for example, one supporting comments, or one with rules about duplicate keys, or any other rule that's not part of the underspecified JSON spec) they should use a custom content type. Something can be valid JSON but invalid AmazJSON and this is exactly how you would distinguish between the two.
That's honestly a leak of internal details, lol (leaky abstractions).
Because internally most apps use the Coral framework, which is kind of old; it uses this JSON format because it has a well-defined shape for inputs, outputs, and errors.
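For anyone else hitting this wall, here's roughly what a raw call looks like, sketched against DynamoDB with botocore doing only the SigV4 signing; the vendor content type is the part that matters:

```python
import json
import urllib.request

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Sketch: a raw DynamoDB ListTables call. "application/json" gets
# rejected; the vendor type "application/x-amz-json-1.0" succeeds.
region = "us-east-1"
url = f"https://dynamodb.{region}.amazonaws.com/"
body = json.dumps({"Limit": 10})
request = AWSRequest(method="POST", url=url, data=body, headers={
    "Content-Type": "application/x-amz-json-1.0",
    "X-Amz-Target": "DynamoDB_20120810.ListTables",
})
creds = boto3.Session().get_credentials().get_frozen_credentials()
SigV4Auth(creds, "dynamodb", region).add_auth(request)

raw = urllib.request.Request(url, data=body.encode(), headers=dict(request.headers))
print(urllib.request.urlopen(raw).read().decode())
```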
Every official AWS guide is designed to make you use as many AWS services as possible, which increases the risk of spend. You have to be extremely critical of anything they recommend (GUI defaults, CLI tools, guides, recommended architectures etc).
There's a reason there are very well paid positions in companies to guide colleagues on how to use AWS cost-effectively and with lower risk.
Exactly. And for small scale deployments or tests, the most expensive parts are almost always the ancillary things or the newfangled services they recommend in lieu of something simpler.