Speaking from experience
1.) At their worst, many industry-prescribed standards act to move liability from the poor Visas of the world onto smaller businesses. For example, if I'm Visa in this scenario: why should I develop a better solution to credit card/identity theft? We have PCI! I can even help sell training out of the goodness of my heart! This, of course, stagnates the industry, both through leaders thinking this piece of paper will protect them and through the parasitic cottage industries that capitalize on the fact that nobody knows better and take what could have been an investment in solving fundamental security problems. Ultimately, accountability goes back to Visa / SWIFT and whatever body couldn't be bothered to fix the problem they created.
HOWEVER
2.) At their best, many of these standards invite the operational rigor required when you want to move past the “3 people in a basement” stage of your startup and have to make grown-up decisions, like what happens if someone is hit by a train. Furthermore, security is often considered a cost center; attaining specific certifications can be one of the few indirect ways to turn it into a measurable product differentiator in whatever space you’re in.
The author's argument that these frameworks are a decent place to start when working toward a comprehensive “cyber security” strategy is excellent for the neophyte looking to better understand one of the many elements that go into running a security program, but it’s far from comprehensive. It’s sad to think, but these compliance-driven certifications often become one of the few forcing functions you have as a security engineer/leader to get someone to do something that closely resembles the right thing. I’m sure many roll their eyes reading this comment, but I’ve had countless interactions with engineers who couldn’t give a shit if their product is insecure just as long as they can ship it on time and be offline by 4, and my only option is to get the legal department to chase them because we will fail an audit if they don’t change course.
Either way, I’d recommend that anyone reading the article keep an open mind toward understanding many of these frameworks, what they are used for, and how you might use them effectively until we figure out something better. Don’t just cargo-cult “standards r bad” or “Jira is dumb”. Try to ask the broader question of why they are needed in the first place.
Given that ADHD leads to problems being productive despite having the intelligence to do the job, that jobs tend to, y'know, fire you if you're not productive, and that you go homeless without money, which you get from having a job, isn't helping ADHD people be productive the same thing?
As far as your trust issues go, that's for each individual to decide. Some people are able to see past their cynicism and derive value from the products companies make.
Your attempt at framing consumer protection as trust issues undermines the emotional scope of being a human being. You should seriously evaluate why someone who asks for the right to privacy is being framed in your mind as having a “trust issue.”
AI, in its current iteration as a centralized technology, will encourage future rent-seeking by incumbents and impact those dependent on it. Open access to models and locally run, offline AI technology will be critical to its long-term success.