Never was.
I flew every other week prior to COVID and haven't once been through the scanners. For the first ~6 years, I opted out and got patted down over and over again.
Then I realized I could even skip that.
Now at the checkpoint, I stand at the metal detector. When they wave me to the scanner, I say "I can't raise my arms over my head." They wave me through the metal detector, swab my hands, and I'm done. I usually make it through before my bags.
Sometimes, a TSA moron asks "why not?" and I simply say "Are you asking me to share my personal healthcare information out loud in front of a bunch of strangers? Are you a medical professional?" and they back down.
Other times, they've asked "can you raise them at least this high?" and motioned with their hands. I ask "Are you asking me to potentially injure myself for your curiosity? Are you going to pay for any injuries or pain I suffer?"
The TSA was NEVER about security. It was designed as a jobs program and to make it look like we were doing something about security.
I was then subjected to a full pat-down and a shoe chemical test as a cherry on top.
Might need to try convincing them next time to let me do the metal detector instead.
What's the point of this higher fidelity scanner if it can't tell the difference between a fly and a restricted object?
There are pros and cons to each, but I did like that approach because it was much more difficult to fat-finger or absentmindedly use the wrong parameter.
I also found a loophole with the Amazon.com return grocery credit. The system for the $10-off-$40 coupon is separate: you just scan a QR code in the store to redeem it. It turns out you can take a photo of their QR code and reuse it over and over again.
To be fair, I've noticed this in multiple supermarket chains over the last few years, although they usually aren't employees; they're Instacart runners or whatever.
I go fairly often to a Sprouts grocery store and there are times I need to avoid multiple people clearly doing an Instacart run with 2+ carts full of items.
The shelves are also often emptier than they used to be at those times.
Sadly, even once he got the subpoena and other paperwork to track down the criminals through Facebook (they had listed my wheels on Marketplace two weeks later), he couldn't find them since they were using VPNs.
Honestly, it's one of the better things YouTube has pitched to me; the quality/relevance of the rest of its recommendations has been nosediving over the last year (or so it feels).
Then the cycle starts again. Sometimes YouTube brings the content back, and sometimes I really have to hunt for it.
It's almost like they bucket my interests into a top-3 list or so, and if the third favorite cycles out a lot (however they decide it's being cycled out), they'll stop recommending or otherwise showing it to me.
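If that guess is right, the mechanics might look something like this purely speculative sketch (the topic names, scores, and thresholds are all invented; this is in no way YouTube's actual system):

    # Purely speculative sketch of the guess above: topic scores decay when
    # unengaged, and only the top 3 above a threshold keep getting shown.
    interests = {"woodworking": 0.9, "synths": 0.7, "chess": 0.4, "cooking": 0.2}
    DECAY = 0.8        # applied when a topic gets no engagement this cycle
    DROP_BELOW = 0.3   # topics under this score stop being recommended

    def next_cycle(engaged):
        for topic in interests:
            if topic not in engaged:
                interests[topic] *= DECAY
        ranked = sorted(interests, key=interests.get, reverse=True)
        return [t for t in ranked[:3] if interests[t] >= DROP_BELOW]

    # Stop engaging with "chess" and it decays out of the rotation:
    print(next_cycle({"woodworking", "synths"}))  # chess at 0.32: still shown
    print(next_cycle({"woodworking", "synths"}))  # chess at 0.256: dropped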
They're not perfect (nothing is), but they're actually pretty good. Every task has to be completable within a sprint; if it's not, you break it down until you have a part that you expect is. Everyone has to unanimously agree on how many points a particular story (task) is worth. The process of coming to unanimous agreement is the difficult part, and where the real value lies. Someone says "3 points", and someone else points out they haven't thought about how it will require X, Y, and Z. Someone else says "40 points" and is asked to explain, and it turns out they misunderstood the feature entirely. After anywhere from 2 to 20 minutes, everyone has tried to think about all the gotchas and all the ways it might be done more easily, and you come up with an estimate. History tells you how many points you usually deliver per sprint, and after a few months the team usually gets accurate to within +/- 10% or so, since underestimation on one story gets balanced by overestimation on another.
It's not magic. It prevents you from estimating things longer than a sprint, because it assumes that's impossible. But it does ensure that you're constantly delivering value at a steady pace, and that you revisit the cost/benefit tradeoff of each new piece of work at every sprint, so you're not blindsided by everything being 10x or 20x slower than expected after 3 or 6 months.
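For the arithmetic behind that steady pace, here's a minimal sketch of velocity-based forecasting (the sprint history and point values are made up for illustration, not from any real team):

    # Minimal sketch of velocity-based sprint forecasting.
    history = [21, 25, 23, 19, 24]   # points actually delivered per past sprint
    velocity = sum(history) / len(history)   # 22.4 points per sprint

    backlog = [3, 5, 8, 2, 13, 5, 8]  # point estimates for upcoming stories
    remaining = sum(backlog)          # 44 points

    # Rough forecast, with the +/- 10% band the parent comment mentions:
    sprints = remaining / velocity    # ~2.0 sprints
    print(f"velocity ~{velocity:.1f} pts/sprint; "
          f"{remaining} pts left ~ {sprints:.1f} sprints (+/- 10%)")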
For instance, someone says a ticket is two days' work. For half the team it could be four days, because they're new to the team or haven't touched that codebase, etc. But because the person who knows the ticket and its context best says two, people tend to go with what they say.
We end up having fewer of those discussions you describe and instead come to an agreement based more on the average length of time the ticket should take to complete.
And then the org makes up new rules that SWEs should be turning around PRs in under 24 hours, and if reviewing (or iterating on those reviews) takes longer than two days, our metrics look bad and there could be consequences.
But that's another story.
A model or new model version X is released, and everyone is really impressed.
3 months later, "Did they nerf X?"
It's been this way since the original ChatGPT release.
The answer is typically no; it's just that your expectations have risen. What was previously a mind-blowing improvement is now expected, and any missteps feel amplified.