Posted by u/damacaner 3 years ago
Does anyone else find AWS and other Amazon services overly complicated?
I think AWS and its signature system make things more complicated than they should be. This is the normal process for signing an API request:

1. you request client credentials, which is normal.

2. construct request URL, normal.

3. add headers, eh, normal.

4. signature... fuck.

First you take the URL from step 2, mash it together with the headers from step 3, add the header keys to the signed-headers list, then SHA-256 hash the payload and hex encode it.

Then you create the string to sign: add the algorithm, the request datetime (formatted as ISO 8601 but with all special characters stripped out), the credential scope, and the hash of the canonical request you created in the previous step.

Then, you calculate this abomination: HMAC(HMAC(HMAC(HMAC("AWS4" + kSecret,"20150830"),"us-east-1"),"iam"),"aws4_request")

after that you calculate this: signature = HexEncode(HMAC(derived signing key, string to sign))

after that you create an authorization header and add signature to it: Authorization: AWS4-HMAC-SHA256 Credential=AKIAIHV6HIXXXXXXX/20201022/us-east-1/execute-api/aws4_request, SignedHeaders=host;user-agent;x-amz-access-token;x-amz-date, Signature=5d672d79c15b13162d9279b0855cfba6789a8edb4c82c400e06b5924aEXAMPLE
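The whole dance above can be sketched in stdlib Python. This is a hedged, minimal sketch, not a full SigV4 implementation: the secret, scope, and canonical request below are hypothetical placeholders (the real canonical-request rules are much fussier).

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    # The nested-HMAC chain from the post; each layer narrows the key's scope.
    k_date = hmac_sha256(("AWS4" + secret).encode("utf-8"), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

def string_to_sign(amz_date: str, scope: str, canonical_request: str) -> str:
    # Algorithm, timestamp, credential scope, then the hex SHA-256 of the canonical request.
    return "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode("utf-8")).hexdigest(),
    ])

# Hypothetical values echoing the post's example; not real credentials.
secret = "wJalrXUtnFEMIEXAMPLEKEY"
scope = "20150830/us-east-1/iam/aws4_request"
sts = string_to_sign("20150830T123600Z", scope, "GET\n/\n(elided canonical request)")
key = derive_signing_key(secret, "20150830", "us-east-1", "iam")
signature = hmac_sha256(key, sts).hex()
```

The hex `signature` is what ends up in the `Signature=` field of the Authorization header shown above.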

...I mean what the fuck? I can understand why people choose Azure over AWS for the sake of freaking simplicity just by looking at this signing and request process. It feels overly complicated. Does anyone else feel the same while working with this abomination?

gw98 · 3 years ago
Ah that's nothing. Simple problem from a high level: static web site on apex domain.

What you should be able to do:

Click click done. Upload files to S3. Point CNAME at AWS.

What you have to do:

Create an S3 bucket and stick the files in it. Create a zone in Route 53 and import your old zone file. Change your nameservers at the registrar. Wait a bit. Go to ACM in the correct region and create a cert. Tell it to add the DNS entries to R53. Wait a bit. Create a CloudFront distribution, making sure that you get all the options right, which is quite difficult. While that's deploying, copy the IAM policy it generates back to the S3 bucket. Jump back to R53 and add a new A record pointing at the CloudFront distribution alias. Cross your fingers and start praying that the whole stack works. If it doesn't, spend several hours working out which thing you forgot to click.

What devops culture would have you do:

Play with CloudFormation for 2 days writing oodles of mind numbing YAML and realise that you have to create the ACM cert and CF distribution in a different region to your bucket but that's impossible. Try to work around this with the recommended StackSets but realise they are so horrible that they are unusable. Spend an hour googling. Download terraform because everyone is gushing over it. Spend several hours learning the above and writing it in HCL, constantly tearing down and creating resources until the whole thing limps along. Eventually realise you need to share this with someone and the thing is stateful, so pay Hashicorp for TF Cloud. After the PO is approved, which takes 2 weeks, check it in to git, cheer loudly, hand it over to a colleague who has a different version of terraform, and then learn about tfenv and terraform upgrades.

I only do this because I'm paid by the hour. I'm not even a cloud or ops guy. I'm an electronics engineer who needs to eat. I eat well due to this mess but I know it's all so so so wrong.

viraptor · 3 years ago
AWS is basically a "construct your own custom infrastructure" service and does that well. But if you're not going past a static page, it's not the best service - just go with vercel or netlify or someone similar instead.

The policies, cloudformation, r53 config, cloudfront all have their purpose and a lot of flexibility. But don't pay the complexity price if someone else can wrap it for you in a nice saas.

yakubin · 3 years ago
100% this. A couple times I've considered moving my personal website to AWS, but each time I got lost in the documentation trying to keep all the different moving parts in my head. At the end of the day managing a single VPS running debian+nginx+certbot[1] with a Gandi domain is orders of magnitude simpler.

[1]: I'm probably going to move that to FreeBSD/OpenBSD with OpenBSD's relayd+httpd+acme-client at some point, but the general idea stays the same.

devonkim · 3 years ago
AWS / systems professional here. I'd use something like Netlify for a personal site instead of the complexities and knobs available when using AWS S3 static site options. Bare AWS, to me, is the equivalent of using Linux From Scratch in 2012 just to run a bash script in a cron job, when there are distributions like Ubuntu out there meant to do a lot of the boring and mostly inconsequential stuff for everyone besides the highly regulated. It's great for educational reasons, but learning and productivity are usually diametrically opposed use cases.
stblack · 3 years ago
I envy you.

I am using a simple S3 bucket to serve image files.

We’re getting close to busting the free tier web traffic limit.

I can’t figure out where, or how, to enter my boss’s credit card so we can start paying for the service.

I’m not kidding. I’ve looked everywhere. Several times. I’ve probably spent two hours on this.

vitorfhc · 3 years ago
Hey, is the AWS account you are using inside an organization? If it is, then all the billing information should go to the management account. I hope this is the issue and that this helps.
pooper · 3 years ago
How can you even get started with AWS without entering payment details? I would love to use AWS for personal projects if I could guarantee it would just stop service outright instead of charging me.
flamebreath447 · 3 years ago
Ah, I see you haven't spent countless hours looking at the management console cost breakdown because management says "our cloud spending is too high", only to read that the highest cost is your "cost savings plan".
kruuuder · 3 years ago
I also used an S3 bucket for static files. The setup was way more complicated than I expected, and figuring out how to get usage statistics and understanding the cost structure even more so. In the end I moved from S3 to a CDN, super simple to set up and costs went down from 200 USD per month to 5 USD per month. I hope I'll never have to deal with AWS again.
e40 · 3 years ago
Sounds like you don't have billing permissions on your IAM user.
YPPH · 3 years ago
>Go to ACM in the correct region and create a cert

Glad you included the "correct region" qualifier. The number of times I've mistakenly created the certificate in my local region, rather than the one designated for CloudFront...

gw98 · 3 years ago
Yep.

Really these days I want to solve the problem and go home, not remember where the poo I stepped in last time I had to solve it was.

TheNewsIsHere · 3 years ago
I recently created a static site at a zone apex without Route53. Everything else just works, well, as you illustrate.

> Play with CloudFormation for 2 days writing oodles of mind numbing YAML and realise that you have to create the ACM cert and CF distribution in a different region to your bucket but that's impossible.

The discrepancies in cross-region support, specifically with ACM, drive me nuts.

I noped away from Terraform back in 2019. At the time, the state management component was too immature, and it was a little... loose with credential security.

You may like Ansible far more if you’d like something a bit more declarative, readable, and with “batteries included.” I have a close friend who is an EE and he finds tools like Ansible much more intuitive and enjoyable than Terraform and CloudFormation.

Every tool has its master.

lifebeyondfife · 3 years ago
I created a CloudFormation script specifically to do this (including CloudFront for CDN). Yes, there's reasonable complexity, but it's a trade-off of security and configuration. As others noted, the documentation is good, and this took me a weekend to write including learning CloudFormation from scratch.

https://github.com/lifebeyondfife/simple-static-website

quickthrower2 · 3 years ago
I prefer to log into Hostgator, click cPanel, click File Manager, drag files across. Any day!

Ok, I missed: point DNS to Hostgator's name servers. Click the button to install Let's Encrypt.

gw98 · 3 years ago
You know it's bad when cPanel is a less painful solution :)
spaniard89277 · 3 years ago
Yep, I just upload to my FTP from my file browser (Thunar), not even cPanel. I use Pico CMS, so it couldn't be easier to manage.
hdjjhhvvhga · 3 years ago
> I eat well due to this mess but I know it's all so so so wrong.

That's the point - I do this too. I like the fact that AWS jobs pay well. I'm fair and I always point out to my clients that AWS is expensive and it will cost them more in the long run, and the vendor lock-in is considerable - but they don't care as "everybody is doing it." Well, that's fine with me, I also like to eat well.


philliphaydon · 3 years ago
I learned CloudFormation in a day. Everyone on HN said use Terraform. Tried to learn Terraform and it made things complicated. Went back to CF.
laputan_machine · 3 years ago
I'm currently in the middle of converting my team to straight CF instead of CDK/Terraform/Serverless. To me those are needlessly complicated; CF is simple, configurable, and, importantly, _declarative_, which is the 'correct' way to approach infra (imho, of course).

Deploying it? `aws cloudformation package -> aws cloudformation deploy`. That's it. No need for third party tooling or other madness.

I think the lack of understanding the fundamentals leads people to picking up tools that abstract the fundamentals away (but then leads to more complex solutions).

fhfuewidxjhe · 3 years ago
The best part about debugging your HCL/CF is when you cannot delete some resource and it won't tell you why beyond "is in use".

I created account after account at first and then had hell deleting them entirely. It was still easier than figuring out what was using a VPC that had nothing visibly attached to it.

zoover2020 · 3 years ago
Are you aware of the AWS CDK?

Doing that in Infra as Code shouldn't take two days IMO.

gw98 · 3 years ago
Yes. CDK is just another way of expressing the same things as Terraform and CloudFormation, but using a framework which is opaque. Plus it's slow as fuck, buggy and difficult to debug when it goes wrong.

Every attempt keeps trying to solve the same problems with a new abstraction but the problem is the underlying abstraction not the tools.

brunooliv · 3 years ago
If you don't use the SDK, how can you judge anything as being "overly complicated"? I mean, I don't know about you, but, last time I checked, signatures, certificates, security and all that stuff IS SUPPOSED to be super complicated because it's a subject with a very high inherent complexity in and of itself. The SDK exists and is well designed precisely to shield you from said complexity. If you deliberately choose not to use it, or can't for some reason, then yes... the complexity will be laid bare.
naasking · 3 years ago
> I mean, I don't know about you, but, last time I checked, signatures, certificates, security and all that stuff IS SUPPOSED to be super complicated because it's a subject with a very high inherent complexity in and of itself.

Actually, security is not supposed to be complicated. "Complicated" is the antithesis of "secure".

Gigachad · 3 years ago
HN users routinely try to do things the obtuse way and then complain when it's hard. Throw in something about the SDK being spyware or not following the Unix philosophy.
BackBlast · 3 years ago
Have you ever tried to import the AWS SDK into a front end client? It's huge. Last time I tried it, it added multiple megabytes of JS to my SPA just so I could do a relatively "simple" call. Yuck.

It did not tree shake cleanly with my build system and I eventually ended up just yanking AWS from the stack entirely.

orf · 3 years ago
Did you import the entire SDK, or only the SDK for the service you actually needed?
asah · 3 years ago
Curl is my preferred SDK.

orf · 3 years ago
Your complaint isn’t about AWS, it’s about the authentication scheme. I find it to be pretty neat, especially when it’s flexible enough to create signed URLs for any method and send them to third parties. We use that as a basis for service-to-service auth. It’s cool and flexible. But overall this complaint seems pretty shallow. You’d just use the client SDKs they publish, or if you want to really go off the beaten path you could just use the signing methods from those SDKs with your own request/response calls.

And if you’re building your own SDK (why?), well there is a lot more complexity down the line once you get past authentication.

omnibrain · 3 years ago
> And if you’re building your own SDK (why?)

There are more languages than C++, Go, Java, JavaScript, Kotlin, .NET, Node.js, PHP, Python, Ruby, Rust & Swift.

orf · 3 years ago
Of course there are. And there are also unofficial SDKs for many other languages.

But if you’re building a feature to interact with AWS then ignoring the availability of SDKs is stupid. If you want to write your service in brainfuck then go for it, but don’t blame AWS for that decision.


mcqueenjordan · 3 years ago
You HMAC the region so that in case a region is compromised, other regions aren't as well. You HMAC the service so that in case a service is compromised, other services aren't as well, you HMAC the timestamp for obvious reasons (time bound the signature), the outer "aws4_request" HMAC, I'm sure there's a good reason for. Maybe just versioning? Not sure.

Also: All of this is handled in the SDKs. Anyone implementing this themselves either isn't using the right libraries or has a very special use case.

dividuum · 3 years ago
Sounds like OP is having a bit of a Chesterton's Fence moment. It's clearly complicated, but you describe why this is probably implemented the way it is. The scheme reminds me of Macaroons: https://blog.gtank.cc/macaroons-reading-list/
naasking · 3 years ago
I'm sure there are "reasons" why the HMACs are layered, the question is, does using HMACs actually add useful security properties here, or is this layered HMAC really just a way to generate a cryptographically secure id? If the latter, then can't you just generate that directly rather than needing to gather all of the right information and HMAC it in just the right way?
mcqueenjordan · 3 years ago
Sorry, how do you propose to transmit a signature over the wire such that if it were compromised, the blast radius is limited to only the called service within the called region within a finite time window?
dastbe · 3 years ago
When a request hits the server authenticating you, that server has to recreate the signature. AWS doesn't want to hand those services your raw credential, because that would make any AWS host a very juicy target. Instead, they provide the partially evaluated signature, including region and service, and the server continues the process from there. This means that if you compromise an EC2 host, the only credentials you get are usable against EC2. The idea is that if you were able to achieve that, you likely don't even need those credentials to do worse things to EC2.
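A toy illustration of that partial evaluation, with purely hypothetical names and secrets: a fleet host stores only the fully derived, scoped key, so it can check signatures for its own service/region/date without ever holding the raw secret.

```python
import hashlib
import hmac

def h(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive(secret: str, date: str, region: str, service: str) -> bytes:
    # Same nested-HMAC chain as SigV4: each layer narrows the key's scope.
    k = h(("AWS4" + secret).encode("utf-8"), date)
    for part in (region, service, "aws4_request"):
        k = h(k, part)
    return k

raw_secret = "hypothetical-root-secret"  # lives only with the auth service
ec2_key = derive(raw_secret, "20150830", "us-east-1", "ec2")  # handed to EC2 hosts
iam_key = derive(raw_secret, "20150830", "us-east-1", "iam")  # a different service's key

string_to_sign = "AWS4-HMAC-SHA256\n20150830T123600Z\n(elided)"
client_sig = h(ec2_key, string_to_sign).hex()  # client re-derives the same scoped key

# The EC2-scoped key verifies the request; the IAM-scoped key cannot.
ok = hmac.compare_digest(h(ec2_key, string_to_sign).hex(), client_sig)
cross = hmac.compare_digest(h(iam_key, string_to_sign).hex(), client_sig)
```

If `ec2_key` leaks, the blast radius is one service, in one region, for one day — which is the whole point of the layering.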
carabiner · 3 years ago
I did AWS training at the Amazon offices in Seattle for data science. I was blown away by the configuration... I had recompiled Linux kernels and configured iptables as a teenager, and this was an entire galaxy more complexity. It took us 6 hours to get to the point where some of us had a Jupyter Notebook running. Many people didn't make it, though.
rippercushions · 3 years ago
Were you doing things the hard way on purpose? It's literally one click to spin up a Jupyter notebook on GCP, arguably less if you use Colab (instant access to shared instances), and I'm sure AWS has a similar service.

https://cloud.google.com/vertex-ai-workbench

carabiner · 3 years ago
So, if I remember right, the endgame was to have an automated ML model running as a service on an AWS instance with SageMaker. Part of that involved having a Jupyter setup installed. Don't remember much past that, though.
crazygringo · 3 years ago
Why?

That doesn't make any sense to me. What problems were you running into? Did you need to configure a bunch of super-non-standard stuff or something?

goodpoint · 3 years ago
Artificial complexity to justify lock-in and artificial salaries.
wombatpm · 3 years ago
Oracle DBAs would like to have a word with you in the back alley.
themoonisachees · 3 years ago
The microsoft model of "if sysadmins are spending much more time debugging windows, then they put windows first on their CV and gradually management forgets linux exists"
Traubenfuchs · 3 years ago
Obviously! Just like devops as a whole. Why do we have this current mess instead of just pushing stuff to heroku?
kureikain · 3 years ago
I thought it was just me who finds it overly complicated.

For people who use the AWS SDK it's all abstracted away. But there are times I just want to send a damn `curl` to download an S3 file, and yes, doing the dance in bash isn't easy.

At one point I wrote a Lua plugin for OpenResty to fetch from S3, and I had to trial-and-error it with a lot of debugging. The header ordering, the timestamp format... all of that...

darrenf · 3 years ago
> But there are times I just want to send a damn `curl` to download an S3 file

You can use curl's `--aws-sigv4` option, then: https://curl.se/docs/manpage.html#--aws-sigv4

gw98 · 3 years ago
It's even worse. We use presigned URLs (from the SDK) and S3 for file uploads. That's one fresh hell you don't want to get involved with. One of those "seemed like a good idea at the time" things...
alserio · 3 years ago
May I ask why? I use and have used both PUT presigned URLs and POST signed policies for uploading files to S3 without too many problems, but your remark worries me a bit. Am I missing something?
antonvs · 3 years ago
What does Azure’s security system for object storage look like? Because if it’s not doing something like this, then I have questions about their security.

The reality is that what AWS is doing here is all fully justified and normal, and that's why SDKs exist. Like many developers, you're being a bit clueless about security here. Instead of complaining, I recommend educating yourself - look into capability security and signed tokens in general, and you'll understand what AWS is doing and it'll make you a better developer.

dan-robertson · 3 years ago
Whether or not AWS is overly complicated, I don’t think this is a good example.

Any decent api should have some kind of signatures which work roughly like:

  signature = hmac_sha256(secret, url+payload+time+etc)
  set_header(…, auth_header(signature))
And the point is to prove that you know the key without sending it to AWS / your logs, and to prove that the request came from someone with the key (and not, e.g., someone replaying a message from logs which was either old or modified).

Most of the mess seems to be due to AWS wanting to limit the power of the secrets that they store, presumably in case they’re compromised. You can get a reasonably good idea of their architecture by looking at how you construct the derived secret – each inner layer will be more tightly controlled than the one around it.
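That generic scheme can be sketched like so — a hedged toy version (all names hypothetical), with the timestamp folded into the MAC so stale replays can be rejected:

```python
import hashlib
import hmac
import time

def sign_request(secret: bytes, method: str, url: str, payload: bytes, ts: int) -> str:
    # MAC over everything that matters: verb, URL, timestamp, and body.
    msg = f"{method}\n{url}\n{ts}\n".encode("utf-8") + payload
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_request(secret: bytes, method: str, url: str, payload: bytes,
                   ts: int, signature: str, now=None, max_skew: int = 300) -> bool:
    now = time.time() if now is None else now
    if abs(now - ts) > max_skew:  # too old (or future-dated): possible replay
        return False
    expected = sign_request(secret, method, url, payload, ts)
    # Constant-time comparison, so timing doesn't leak how close a guess was.
    return hmac.compare_digest(expected, signature)

secret = b"hypothetical-shared-secret"
ts = int(time.time())
sig = sign_request(secret, "PUT", "/bucket/key", b"hello", ts)
```

Because the timestamp and payload are inside the MAC, a logged request is only replayable within the skew window, and any modification invalidates the signature.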