I agree, AWS (and Amazon Alexa + Ring) seem to be skirting, or choosing active silence on, that "Lambda" outage that took out us-east-1. Alexa has had at least two full-scale outages this year, and Ring was successfully hacked as well.
Amazon's response: "looks like a Lambda... more later" then nothing, nothing, and "it must be a 3rd party skill."
AWS health page: GREEN.
AWS Post-event summaries: non-existent after 2021.
This is quite troubling when you consider what AWS holds in its coffers, along with the number of customers AWS currently has. No transparency, not even strategic transparency, is a definite red flag.
> Operational issue - Amazon CloudFront (Global)
> Service: Amazon CloudFront
> Severity: Informational
> Elevated Error Rates
> Jul 18 10:26 AM PDT: Between 9:37 AM and 10:13 AM PDT, we experienced elevated error rates for requests serviced by the CloudFront Origin Shield and Regional Edge Cache in the US-EAST-1 region. The issue has been resolved and service is operating normally.
FYI, it takes some pretty high-level approval at AWS for someone to make a change on their "status" page, since an update signals SLA breaches and money stuff. It's not an actual live status page.
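For anyone who wants something less curated than the green dashboard: the account-scoped Health API usually surfaces events with less ceremony. A minimal boto3 sketch; it requires a Business or Enterprise support plan, and its global endpoint, fittingly, lives in us-east-1:

```python
# Minimal sketch: poll the AWS Health API for open events in us-east-1.
# Requires an account on a Business/Enterprise support plan; the Health
# API's global endpoint is itself hosted in us-east-1.
import boto3

health = boto3.client("health", region_name="us-east-1")
resp = health.describe_events(
    filter={"regions": ["us-east-1"], "eventStatusCodes": ["open"]}
)
for event in resp["events"]:
    print(event["service"], event["eventTypeCode"], event["startTime"])
```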
> 421 ERROR
> The request could not be satisfied.
> The distribution does not match the certificate for which the HTTPS connection was established with. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
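That 421 is CloudFront reporting a distribution/certificate (SNI) mismatch: the hostname the TLS session was negotiated for isn't attached to the distribution the request landed on. A quick way to see what the edge is actually serving (hostnames below are hypothetical) is to pull the certificate yourself:

```python
# A minimal diagnostic sketch (hostnames hypothetical): fetch the
# certificate CloudFront presents for a given SNI name, so we can
# compare it against the distribution we expected to hit.
import socket
import ssl

def cert_sans(host: str, sni: str, port: int = 443) -> list[str]:
    """Return subjectAltName entries of the cert served for `sni` at `host`."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # keep chain verification, skip name matching
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=sni) as tls:
            cert = tls.getpeercert()
    return [value for _kind, value in cert.get("subjectAltName", ())]

# If this list doesn't include www.example.com, you get the 421 above.
print(cert_sans("d111111abcdef8.cloudfront.net", "www.example.com"))
```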
Having worked at AWS: us-east-1 is probably the easiest region to deploy in (it's old, it's popular, every AWS service is available there), but might just be the worst region to rely on (it's old, it's popular, and every AWS service is available there—but only most of the time).
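The "every AWS service is available there" part is easy to verify, for what it's worth: AWS publishes per-service region lists as public Systems Manager parameters. A small sketch (swap in any service name):

```python
# Sketch: list every region where a given service (here Lambda) is
# offered, via AWS's public SSM parameters for global infrastructure.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")
pages = ssm.get_paginator("get_parameters_by_path").paginate(
    Path="/aws/service/global-infrastructure/services/lambda/regions"
)
regions = [p["Value"] for page in pages for p in page["Parameters"]]
print(sorted(regions))  # us-east-1 will be on every service's list
```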
I vividly remember when AWS Fault Injection Simulator first launched, posting a Drake meme in the #aws-memes Slack channel with the first panel being "Using AWS Fault Injection Simulator" and the second "Deploying in us-east-1".
I will never understand why anyone on our infrastructure team thought for a moment that putting our backup datacenter in us-east-1 was a good idea. Or why nobody else tried to get him fired.
That's like buying a backup generator from a guy that wants to meet you in the Walmart parking lot.
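If you do have to keep a foot in us-east-1, the usual hedge is DNS failover to a secondary region. A minimal Route 53 sketch, with the zone ID, record names, and health check ID all hypothetical:

```python
# Sketch: Route 53 failover routing so traffic shifts to a secondary
# region when the us-east-1 endpoint's health check fails.
import boto3

r53 = boto3.client("route53")
r53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "primary-us-east-1",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "app-use1.example.com"}],
                    # PRIMARY records need a health check to fail over on.
                    "HealthCheckId": "hc-primary-EXAMPLE",
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "secondary-us-west-2",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "app-usw2.example.com"}],
                },
            },
        ]
    },
)
```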