1. you request client credentials, which is normal.
2. construct request URL, normal.
3. add headers, eh, normal.
4. signature... fuck.
First you build the canonical request: take the URL from step 2, mash it together with the headers from step 3, list the header keys as the signed headers, then SHA-256 hash the payload and hex-encode it.
Then you create the string to sign: add the algorithm, the request date-time (ISO 8601, but with all the separators stripped out), the credential scope, and the hash of the canonical request you created in the first step.
Then, you calculate this abomination: HMAC(HMAC(HMAC(HMAC("AWS4" + kSecret,"20150830"),"us-east-1"),"iam"),"aws4_request")
after that you calculate this: signature = HexEncode(HMAC(derived signing key, string to sign))
after that you create an authorization header and add signature to it: Authorization: AWS4-HMAC-SHA256 Credential=AKIAIHV6HIXXXXXXX/20201022/us-east-1/execute-api/aws4_request, SignedHeaders=host;user-agent;x-amz-access-token;x-amz-date, Signature=5d672d79c15b13162d9279b0855cfba6789a8edb4c82c400e06b5924aEXAMPLE
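The whole dance above can be sketched with just the Python standard library. This is a simplified sketch, not a full implementation (the canonical request is trimmed down, and the key, scope, and access key ID are example values, not real credentials):

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

# Step 1: canonical request = method, URI, query string, canonical headers,
# signed header names, and the hex-encoded SHA-256 of the payload.
method = "GET"
canonical_uri = "/"
canonical_query = "Action=ListUsers&Version=2010-05-08"
canonical_headers = (
    "host:iam.amazonaws.com\n"
    "x-amz-date:20150830T123600Z\n"
)
signed_headers = "host;x-amz-date"
payload_hash = sha256_hex(b"")  # empty body for a GET

canonical_request = "\n".join([
    method, canonical_uri, canonical_query,
    canonical_headers, signed_headers, payload_hash,
])

# Step 2: string to sign = algorithm, timestamp (ISO 8601 with the
# separators stripped), credential scope, hash of the canonical request.
amz_date = "20150830T123600Z"
scope = "20150830/us-east-1/iam/aws4_request"
string_to_sign = "\n".join([
    "AWS4-HMAC-SHA256", amz_date, scope,
    sha256_hex(canonical_request.encode("utf-8")),
])

# Step 3: the nested-HMAC key derivation quoted above.
secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # example, not real
k_date = hmac_sha256(("AWS4" + secret).encode("utf-8"), "20150830")
k_region = hmac_sha256(k_date, "us-east-1")
k_service = hmac_sha256(k_region, "iam")
k_signing = hmac_sha256(k_service, "aws4_request")

# Step 4: signature = HexEncode(HMAC(derived signing key, string to sign)).
signature = hmac.new(
    k_signing, string_to_sign.encode("utf-8"), hashlib.sha256
).hexdigest()

authorization = (
    "AWS4-HMAC-SHA256 "
    f"Credential=AKIDEXAMPLE/{scope}, "  # example access key ID
    f"SignedHeaders={signed_headers}, "
    f"Signature={signature}"
)
print(authorization)
```

Get the ordering, the timestamp format, or the header canonicalization even slightly wrong and the whole thing produces a valid-looking but rejected signature, which is exactly why it's so painful to debug by hand.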
...I mean what the fuck? I can understand why people choose Azure over AWS for the sake of freaking simplicity just by looking at this signing and request process. It feels overly complicated. Does anyone feel the same while working with this abomination?
What you should be able to do:
Click click done. Upload files to S3. Point CNAME at AWS.
What you have to do:
Create an S3 bucket and stick the files in it. Create a zone in Route 53 and import your old zone file. Change your nameservers at the registrar. Wait a bit. Go to ACM in the correct region and create a cert. Tell it to add the DNS entries to R53. Wait a bit. Create a CloudFront distribution, making sure you get all the options right, which is quite difficult. While that's deploying, copy the IAM policy it generates back to the S3 bucket. Jump back to R53 and add a new A record pointing at the CloudFront distribution alias. Cross fingers and start praying that the whole stack works. If it doesn't, spend several hours working out which thing you forgot to click.
What devops culture would have you do:
Play with CloudFormation for 2 days writing oodles of mind-numbing YAML and realise that you have to create the ACM cert and CF distribution in a different region to your bucket, but that's impossible. Try and work around this with the recommended StackSets but realise they are so horrible that they are unusable. Spend an hour googling. Download Terraform because everyone is gushing over it. Spend several hours learning the above and writing it in HCL, constantly tearing down and creating resources until the whole thing limps along. Eventually realise you need to share this with someone and the thing is stateful, so pay HashiCorp for TF Cloud. After the PO is approved, which takes 2 weeks, check it in to git, cheer loudly, hand it over to a colleague who has a different version of Terraform, and then discover tfenv and Terraform upgrades.
I only do this because I'm paid by the hour. I'm not even a cloud or ops guy. I'm an electronics engineer who needs to eat. I eat well due to this mess but I know it's all so so so wrong.
The policies, CloudFormation, R53 config, and CloudFront all have their purpose and a lot of flexibility. But don't pay the complexity price if someone else can wrap it for you in a nice SaaS.
[1]: I'm probably going to move that to FreeBSD/OpenBSD with OpenBSD's relayd+httpd+acme-client at some point, but the general idea stays the same.
I am using a simple S3 bucket to serve image files.
We’re getting close to busting the free tier web traffic limit.
I can’t figure out where, or how, to enter my boss’s credit card so we can start paying for the service.
I’m not kidding. I’ve looked everywhere. Several times. I’ve probably spent two hours on this.
Glad you included the "correct region" qualifier. The number of times I've mistakenly created the certificate in my local region, rather than the one designated for CloudFront...
Really, these days I want to solve the problem and go home, not remember where the poo I stepped in was the last time I had to solve it.
> Play with CloudFormation for 2 days writing oodles of mind numbing YAML and realise that you have to create the ACM cert and CF distribution in a different region to your bucket but that's impossible.
The discrepancies in cross-region support, specifically with ACM, drive me nuts.
I noped away from Terraform back in 2019. At the time, the state management component was too immature, and it was a little... loose with credential security.
You may like Ansible far more if you’d like something a bit more declarative, readable, and with “batteries included.” I have a close friend who is an EE and he finds tools like Ansible much more intuitive and enjoyable than Terraform and CloudFormation.
Every tool has its master.
https://github.com/lifebeyondfife/simple-static-website
OK, I missed: point DNS to HostGator name servers. Click the button to install Let's Encrypt.
That's the point - I do this too. I like the fact that AWS jobs pay well. I'm fair and I always point out to my clients that AWS is expensive and it will cost them more in the long run, and the vendor lock-in is considerable - but they don't care as "everybody is doing it." Well, that's fine with me, I also like to eat well.
Deploying it? `aws cloudformation package -> aws cloudformation deploy`. That's it. No need for third party tooling or other madness.
I think a lack of understanding of the fundamentals leads people to pick up tools that abstract the fundamentals away (which then leads to more complex solutions).
I created account after account at first and then had hell deleting them entirely. That was still easier than figuring out what was using a VPC that had nothing visibly attached to it.
Doing that in Infra as Code shouldn't take two days IMO.
Every attempt keeps trying to solve the same problems with a new abstraction, but the problem is the underlying abstraction, not the tools.
Actually, security is not supposed to be complicated. "Complicated" is the antithesis of "secure".
It did not tree shake cleanly with my build system and I eventually ended up just yanking AWS from the stack entirely.
https://curl.se/docs/manpage.html#--aws-sigv4
And if you’re building your own SDK (why?), well there is a lot more complexity down the line once you get past authentication.
There are more languages than C++, Go, Java, JavaScript, Kotlin, .NET, Node.js, PHP, Python, Ruby, Rust & Swift.
But if you’re building a feature to interact with AWS then ignoring the availability of SDKs is stupid. If you want to write your service in brainfuck then go for it, but don’t blame AWS for that decision.
Also: All of this is handled in the SDKs. Anyone implementing this themselves either isn't using the right libraries or has a very special use case.
https://cloud.google.com/vertex-ai-workbench
That doesn't make any sense to me. What problems were you running into? Did you need to configure a bunch of super-non-standard stuff or something?
For people who use the AWS SDK, it's all abstracted away. But there are times I just want to send a damn `curl` to download an S3 file and, yes, doing the dance in bash isn't easy.
There was a time I wrote a Lua plugin for OpenResty to fetch from S3, and I had to trial-and-error my way through with a lot of debugging. The ordering, the timestamp format... all of that...
You can use curl's `--aws-sigv4` option, then: https://curl.se/docs/manpage.html#--aws-sigv4
The reality is that what AWS is doing here is all fully justified and normal, and that's why SDKs exist. Like many developers, you're being a bit clueless about security here. Instead of complaining, I recommend educating yourself: look into capability security and signed tokens in general, and you'll understand what AWS is doing, and it'll make you a better developer.
Any decent API should have some kind of signatures which work roughly like:
And the point is to prove that you know the key without sending it to AWS / your logs, and to prove that the request came from someone with the key (and not, e.g., someone replaying a message from logs which was either old or modified). Most of the mess seems to be due to AWS wanting to limit the power of the secrets that they store, presumably in case they're compromised. You can get a reasonably good idea of their architecture by looking at how you construct the derived secret: each inner layer will be more tightly controlled than the one around it.
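That layering is easier to see in code. A minimal sketch (standard library only; the date, region, service, and secret are example values, not real credentials):

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

# Each layer narrows the key's scope. The long-lived secret never signs a
# request directly; only a key bound to one date/region/service ever does.
secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # example, not real
k_date = hmac_sha256(("AWS4" + secret).encode("utf-8"), "20150830")
k_region = hmac_sha256(k_date, "us-east-1")
k_service = hmac_sha256(k_region, "iam")
k_signing = hmac_sha256(k_service, "aws4_request")

# A key derived for a different day is completely different, so a leaked
# signing key is only useful within its original scope.
k_other_day = hmac_sha256(("AWS4" + secret).encode("utf-8"), "20150831")
```

So if `k_signing` leaks from a log somewhere, the attacker gets at most one day of one service in one region, rather than the account's root secret.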