Unfortunately, most of the "hard" work will be metrics massaging, redefining words, and covering things up.
But the first phase will be a lot of "security & quality" presentations to the troops, some hiring, and ground prep-work so the blame can be assigned when things go south.
I would like to be more positive, but I've seen this cycle too many times already.
How about making security part of the requirements to keep a job, instead of a monetary bonus? And this has to be applied at the top first, and only then to the bottom.
If the person was notified of a security problem and ignored it, or tried to sweep it under the rug instead of taking it seriously, then absolutely fire them. But assuming no malfeasance, firing people just leads to institutional-knowledge loss, which leads to worse outcomes.
We are talking about executives here; it's their job to organize teams and work in a way that achieves stated objectives. This is the reality for every higher-up position and one of the reasons they are the way they are.
I see some devs thinking I was talking about them and getting defensive :) It's not a "we discover a bug, we fire you" situation, that's ridiculous.
I cannot speak for everyone, but in my neck of the woods there are specific deliverables: locking down server access further, removing poorly secured test accounts and older auth methods in general, locking down the network in terms of what can access what, cleaning up dependencies, etc. There's a list of about 20-30 things that are to be measured automatically and driven to ~0.
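For illustration, the kind of automated "drive it to ~0" tracking described above can be sketched roughly like this (the metric names and numbers here are hypothetical, not the actual internal list):

```python
# Hypothetical security-deliverable tracker: each metric is a count of
# remaining findings, and the goal is to drive every count to zero.
def remaining_work(metrics: dict[str, int], threshold: int = 0) -> dict[str, int]:
    """Return only the metrics still above the target threshold."""
    return {name: count for name, count in metrics.items() if count > threshold}

# Example scan results (made-up numbers for illustration).
metrics = {
    "open_server_access_grants": 12,
    "legacy_auth_methods_in_use": 3,
    "unsecured_test_accounts": 0,
    "outdated_dependencies": 41,
}

# Only the nonzero items remain on the to-do list.
print(remaining_work(metrics))
```

The point is that the dashboard shrinks as items hit zero, which is also exactly what makes such metrics easy to game if the counting itself isn't audited.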
>How about security being part of the requirements to keep a job instead of monetary bonus?
Because for executives, the monetary incentives matter more than the position itself, and they are the ones who should be held accountable.
Of course this is a form of lip service. Of course there's way more to do. But they are making a very public statement to prioritize security. That's a positive.
I'm a bit curious how it is worded. Will it actually improve security, or will the metrics be gamed in ways that actually decrease security (e.g., teams might stop registering/tracking issues as a way of not having registered bugs)?
Wow, the first two things I thought of after reading the first few lines of that article were:
- the Peter Principle (as a loosely related concept)
and
- that the new MS diktats would immediately be gamed.
Imagine my surprise when I saw both of those concepts mentioned in the See Also section!
I did not know about most of the other See Also points, though, except for Confirmation bias, which I had read about on Hacker News, and the Hawthorne Effect, which I had read about elsewhere, IIRC, in a Dale Carnegie or similar self-help book.
"...its Senior Leadership Team's pay partially dependent on whether the company is "meeting our security plans and milestones," though Bell didn't specify how much executive pay would be dependent on meeting those security goals."
"Perhaps they should tie executive pay to customer satisfaction?"
MS is a publicly quoted company wot sells shares. They are therefore answerable to their shareholders. Shareholders first, everyone else second: that's largely the order of the day.
It's a bit shit when you try to factor in some sort of moral angle, but that is how things are.
I think what regulators should start doing, instead of or on top of inflicting monetary fines, is to force companies to mint a significant number of new shares and put them in escrow accounts. Then, if some future event happens (a security incident meeting specific criteria for a catastrophic incident), the shares would be sold at market price and used to refund the affected customers and cover the regulatory costs.
In this way, executives and policy makers have a harder time moving on without being held accountable before the effect of their bad decisions becomes apparent, because a reputation for shareholder value destruction could haunt them. Executives would be incentivized to create a better security culture.
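As a rough sketch of how such an escrow could pay out after a qualifying incident (all numbers here are hypothetical, purely to show the mechanics):

```python
# Hypothetical escrow payout: newly minted shares held in escrow are sold
# at market price after a qualifying incident; proceeds refund customers
# first, then cover regulatory costs. Any shortfall is reported back.
def escrow_payout(shares_in_escrow: int, market_price: float,
                  customer_damages: float, regulatory_costs: float):
    proceeds = shares_in_escrow * market_price
    to_customers = min(proceeds, customer_damages)          # customers paid first
    to_regulator = min(proceeds - to_customers, regulatory_costs)
    shortfall = (customer_damages + regulatory_costs) - (to_customers + to_regulator)
    return to_customers, to_regulator, max(shortfall, 0.0)

# 10M escrowed shares sold at $40 each -> $400M of proceeds, enough to
# cover $350M of customer damages and $30M of regulatory costs.
print(escrow_payout(10_000_000, 40.0, 350e6, 30e6))
# (350000000.0, 30000000.0, 0.0)
```

The dilution from minting the shares is itself part of the deterrent: it hits the share price, and therefore executive equity compensation, the moment the escrow is created.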
The Ars Technica article is a lot more critical of Microsoft and provides some history. That said, it is frustrating how most of the links just link to other Ars articles.
Funny how I've heard from an Azure employee who worked with many big clients that very few among them cared about security - the incentives were just not there.
Seems like they're finally doing something about that, to set an example for the rest of the industry.
I doubt that's the case. I've been working in/near enterprise sales for quite a while now. Security is considered unglamorous table-stakes: companies won't buy your stuff because you're doing all the right things, but they'll definitely not buy your stuff if you're not.
Giant products like AWS and Azure are too big to grill about their security controls. If you try to ask an AWS rep about something, they'll direct you to their security portal where you can download a SOC2 report and a few other things. That's about all you'll get from them unless you're equally huge. The most you can really go by is their reputation. If you trust AWS, buy their product. If you don't, don't. That's all the prior research a typical < 10,000 employee business can possibly do.
My suspicion is that your friend is only talking to clients who've vetted Azure and figured "it's Microsoft: they're big so they probably know more about it than I do". It's not that they don't care. It's that there's nothing they can do about it. The people who don't already trust Azure would never have gotten as far as talking to your friend in the first place.
We're getting drowned in security checklists from clients now.
A lot of them don't make much sense for us: we primarily make a Win32 B2B program hosted by the customers themselves, and a lot of the checklists are about generic web SaaS concerns (because we charge like SaaS). But the person on the other end wants all the questions answered regardless.
Seems that as long as you can put a checkmark in a box that you follow various "best practices" and whatnot, actual details don't matter. You put a checkmark in a box, you did your best.
From being on the buying side, it's likely that the person sending you that questionnaire knows a lot of it is irrelevant to your situation, but they're personally reviewing 100 vendors this year (no, seriously) and there aren't enough hours in the week for them to make exceptions for everyone.
Very often the best answer would be like:
> Q: Do you use multi-tenant databases?
> A: N/A: you'll be deploying our product on your own server.
That's actually a perfectly fine answer! The person reading it doesn't have to explain large gaps in the answers to their boss. It documents why this isn't relevant in a way their successor can easily understand next year when they're reviewing those 100 vendors as part of their annual Vendor Management Policy™ process.
It really depends on the team. Trying to paint Microsoft engineering with a broad stroke is impossible, because it's a patchwork of business units and teams that rarely communicate and work together unless forced. Some teams are very visible and have top talent that prioritizes and thinks about security. Some services do not... the problem is that security is very much a "you're only as strong as your weakest link" kind of thing.
This is a step in the right direction to get the top-layer prioritizing security.
Couldn’t agree more. I can’t emphasize enough how BIG Microsoft is and how many dimensions of security there are. Nobody has as many attack vectors as we do. I’m pretty confident in saying that. It’s a super hard problem and nearly impossible to enforce all of them from an organizational standpoint. But this is a great step in trying to do so.
That's true; he was talking about clients, though, if memory serves.
The main challenge he highlighted is that there are no financial incentives for most companies in the industry to stay secure (unless you're a security company): the punishment (including reputational risk) is just way too small.
How could you possibly secure anything when the other half of the company is shoving ads into the start menu and implementing "ai search" to record every keystroke and screen text into perpetuity?
Even the good people at Microsoft will forever be undermined by this: complete demoralization, throwing their hands up at doing anything properly.
They are antithetical, really. AI is fundamentally about undefined behavior, because we cannot define the behavior better algorithmically (putting aside the AI algorithms themselves). Security is about avoiding such undefined behavior and only doing things that we expect. At best you have a very secure sandbox to keep the AI in, away from anything but user input and training data.
You should meet my director, who wants everything swept under the rug while he focuses on empire building.
What's the percentage? What are the milestones?
Edit: The "security plans and milestones" appear to be here: https://www.microsoft.com/en-us/security/blog/2024/05/03/sec...
Security is somewhere under that umbrella. Also all the other stuff end users give a shit about that Microsoft doesn't...
"How did we do?"
"Please take a few minutes to fill out this survey on your recent interaction with Microsoft"
(https://news.ycombinator.com/item?id=40249290)
At my current place, there are developers still using Node 10 and other ancient software, but god forbid you not fill out a checklist.