The solution is complete zero trust: distrusting the network inside the organization. You should treat the internal network as external -- hostile. Google does this. They were the first to widely adopt zero trust with BeyondCorp, and there has not been a Google internal organizational breach since Aurora (the attack that pushed them to adopt BeyondCorp, which is what they call zero trust).
You have completely managed endpoints, strong hardening of each endpoint, and a complete inventory of all the resources in the organization. You have certificates installed on each device. You have an ACL engine that determines whether a user should get access to a particular resource. You can use deterministic lists and also incorporate heuristics to detect anomalies (working hours, etc.). All Google internal apps are internet-facing. You can open them and get redirected to the SSO portal. Go ahead and try to get in. You will not.
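To make the ACL engine concrete, here is a minimal sketch of the per-request decision it renders, combining a deterministic allow-list with a working-hours heuristic. This is my own illustration with invented names and thresholds, not BeyondCorp's actual implementation:

    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical request context; in a real deployment this would come
    # from the device inventory, the SSO session, and the resource catalog.
    @dataclass
    class AccessRequest:
        user: str
        device_cert_valid: bool   # device presented a valid, inventoried cert
        resource: str
        timestamp: datetime

    # Deterministic ACL: which users may reach which resources.
    ACL = {
        "payroll-app": {"alice", "bob"},
        "wiki": {"alice", "bob", "carol"},
    }

    def within_working_hours(ts: datetime) -> bool:
        # Toy heuristic: treat access outside 07:00-20:00 as anomalous.
        return 7 <= ts.hour < 20

    def decide(req: AccessRequest) -> str:
        if not req.device_cert_valid:
            return "deny: unmanaged or unknown device"
        if req.user not in ACL.get(req.resource, set()):
            return "deny: user not on ACL for resource"
        if not within_working_hours(req.timestamp):
            return "step-up: anomalous hour, require re-authentication"
        return "allow"

    print(decide(AccessRequest("alice", True, "payroll-app",
                               datetime(2024, 6, 3, 23, 15))))
    # -> step-up: anomalous hour, require re-authentication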
Many of these security problems are solved. You just need to implement the solutions.
Google makes zero-trust work by having a highly "centralized" or "uniform" tech stack all the way from tooling, to hosting, to infra. So everything defaults to zero-trust and it's not something you would need to think about setting up.
Most big organizations have built up their internal/external tech over decades, with large parts of it essentially "mothballed" and a high degree of heterogeneity stemming from tech changes over time, acquisitions, and departments having flexibility in what tools and designs they can use. Shifting to zero-trust requires a lot of migration work across all of this, "training" (i.e. figuring out how to get stubborn IT people to buy in to the new way of doing things), and most likely a shift to the "centralized" kind of model that Google uses.
Even if the first two are funded, that third "centralized"/"uniform" model can be very expensive. One of the reasons Google has to deprecate things so often is that the centralized model requires constant migrations and breaking upgrades to keep things running, which means "mothballing" isn't a thing: you either have enough people assigned to handle the migrations, or you turn the thing down.
I agree that zero-trust is the best security model and it solves many problems. But I guess I'm also saying it's much easier said than done. With my startup I want to solve a lot of these kinds of problems (i.e. introducing "uniformity" that follows best practices) for my customers, but it's inevitable that at some point a customer will ask for the ability to "turn off zero-trust and allow IP whitelisting" - is it worth it to close a potentially big deal? It's also probable that any reasonably successful company will at some point perform an acquisition involving a company without a zero-trust model - is that reason to cancel the acquisition?
> Shifting to zero-trust requires a lot of migration work across all of this, "training" (i.e. figuring out how to get stubborn IT people to buy in to the new way of doing things)[...]
You're right: most times organizations need a fire lit underneath them to change. For Google, it was probably the NSA annotation "SSL added and removed here :^)" on a slide showing Google's architecture from the Snowden leaks.
> You have completely managed endpoints, strong hardening of each endpoint, and a complete inventory of all the resources in the organization. You have certificates installed on each device. You have an ACL engine that determines whether a user should get access to a particular resource.
None of those are “solved” for any mid- or large-sized enterprise where tech isn’t their core competency. In fact, I’d say most of these are insurmountably hard.
This is a “draw the whole owl” kind of response. It’s all very well to say that you can do things differently, but imagine Shaw Industries (22k employees, the largest carpet/flooring manufacturer in the USA) doing any of that.
The first step to not doing any of that is saying you're not doing it.
I work at an older company and it has successfully moved a number of apps to this model.
Some will take another decade, but many are there.
I think I agree with this conclusion. But I work in a Shaw-like enterprise (only the products are more mundane than flooring). What are the hurdles we’d see if we tried it? What processes and practices are we likely using, that would break under the zero trust model?
More of a "pay someone else to do it" situation, then. And the question is how much they value security, and whether they can afford it without killing their business.
I have found that thinking a security problem is "solved" is a big warning that you're at risk. There's no such thing as perfect security in anything. If you adopt the mindset that you're "safe" in some sort of absolute way, you stop looking very hard for security breaches and won't catch the one that will, sooner or later, happen.
Indeed, zero trust is a powerful mindset but only an incompetent organization willingly opens all internal resources to the Internet for no good reason.
Another important philosophy is defense in depth: Just because you use zero trust principles internally doesn't mean you shouldn't still put a big freaking moat around your environment.
That is a bit hypocritical of Google, though, since they do scan their users' mail, for example. In that sense they certainly did implement zero trust, but maybe here it has another meaning.
And I don't think such an architecture fits every company. Most (non-software) tech companies suffer from simple social engineering, scam mails, and employees handing third parties their credentials. Economic espionage in all its forms is also a threat.
Google certainly has other security concerns as well. Internal whistleblowers and maybe activist circles that run counter to the vision of management. For these problems their architecture might make sense, but it doesn't mean every company has the same threat vectors.
Of course security problems can be solved, but the infrastructure needed isn't trivial and many software stacks for engineering just don't allow for third party auth anyway.
Many developers (software or not) also shudder about their "managed endpoints". Works for Google obviously, but they are a special case here.
Much more effective here is sensible network segmentation. You don't need fancy auth services for that, just classic IT with a little sense for real threats. "Everything facing the internet" certainly is a very specific strategy that cannot be generalized.
For most companies, zero trust means strong device management, a cert in the TPM, buying an Okta service, and then buying an appliance that you put between users and services before cutting off direct user access. You can still use a VPN or expose the appliance to the internet.
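As a rough sketch of what that appliance does on each request (an assumed flow with invented names, not any particular vendor's product): verify the device certificate against the inventory, verify the SSO session, and only then forward to the backend, which no longer accepts traffic from anywhere else:

    # Sketch of the per-request gate at an identity-aware appliance/proxy.
    # Certificate and token validation are stubbed out; all names invented.

    TRUSTED_DEVICE_FINGERPRINTS = {"ab:cd:ef:12"}  # fed from device management

    def device_is_managed(client_cert_fingerprint: str) -> bool:
        return client_cert_fingerprint in TRUSTED_DEVICE_FINGERPRINTS

    def sso_session_is_valid(token: str) -> bool:
        # In reality: verify signature and expiry against the IdP (e.g. Okta).
        return token == "valid-session-token"

    def handle_request(client_cert_fp: str, sso_token: str, path: str) -> str:
        # Every request passes both checks, because direct access to the
        # backend has been cut off and this proxy is the only way in.
        if not device_is_managed(client_cert_fp):
            return "403: device not in inventory"
        if not sso_session_is_valid(sso_token):
            return "302: redirect to SSO portal"
        return "forwarding " + path + " to backend"

    print(handle_request("ab:cd:ef:12", "valid-session-token", "/payroll"))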
I'm half with you, but riddle me this: in a ZT environment, every request needs to be accompanied by some verifiable assertion of identity and authorization. In this case, and others we've seen recently, the identity provider itself has been compromised, for example because an attacker has obtained signing keys that allow them to effectively forge approval from the identity provider. So even in a ZT environment, isn't it game over at that point?
It seems that we have a situation where all our trust is in the identity provider now, and we suffer when that provider is compromised.
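To see why that's game over, consider a toy signed-token scheme. (HMAC is used here for brevity; real SAML/OIDC uses asymmetric signatures, but the trust structure is the same: whoever holds the signing key can mint assertions the relying party cannot distinguish from legitimate ones.) All names below are invented:

    import hashlib
    import hmac
    import json

    IDP_SIGNING_KEY = b"secret-known-only-to-the-idp"  # ...until it leaks

    def sign(claims: dict, key: bytes) -> str:
        body = json.dumps(claims, sort_keys=True)
        tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
        return body + "|" + tag

    def verify(token: str, key: bytes):
        body, tag = token.rsplit("|", 1)
        expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
        return json.loads(body) if hmac.compare_digest(tag, expected) else None

    # Legitimate login: the IdP signs an assertion, the service accepts it.
    token = sign({"sub": "alice", "role": "user"}, IDP_SIGNING_KEY)
    print(verify(token, IDP_SIGNING_KEY))

    # An attacker who stole the key forges an admin assertion; the relying
    # party has no cryptographic way to tell the difference.
    forged = sign({"sub": "attacker", "role": "admin"}, IDP_SIGNING_KEY)
    print(verify(forged, IDP_SIGNING_KEY))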
The misaligned incentives between security and profit, especially in public companies, are not really a fixable problem without a massive cultural shift. I'm not sure at this point what could even trigger one.
I've always dabbled in cybersecurity, taking on the hat in various roles over the years, but I have refused to go full-time into it because of what I have personally seen in the industry: an overwhelming focus on compliance rather than actual good security practices, and compliance standards that are either very lacking or poorly enforced.
This is exactly it. There is no incentive to prioritise security. It is not visible to customers except in terms of compliance, which is most likely a check-list approach.
I think it needs a massive cultural shift, but from customers. If customers were willing to evaluate security properly (consumers cannot, but enterprises can), demand binding assurances, and make buying choices accordingly, the industry would respond.
Of course MS is too strongly entrenched in the desktop market for this to be completely effective.
When I first left offensive security consulting and joined an internal defensive team, a wise ex-agency person said to me "In product development, the first things to often get axed are security, and performance. They are invisible to the user, until they aren't, and rarely do failures in those areas end a company."
Granted, this was prior to ransomware really blowing up, but even ransomware is a different threat model; it doesn't mean your product has to be good at security.
The purpose of using Microsoft products in an office environment is to run your office with the same personal-computer gains you first realized when software-powered substitutes effectively replaced the traditional office machines and more labor-intensive tasks.
Which all occurred way before things like "single sign-on" got popular among those who didn't seem to know any better. The second it appeared, it was easily recognized as one of the many consumer/entertainment features that must be disabled across every bit of any serious corporate network.
Also best disabled on any home computer before it is allowed to touch the internet.
There was no forthcoming mitigation; all Microsoft leadership could do was throw up their hands. After all, there were insurmountable reasons why such a threat could not be overcome.
>it required customers to turn off one of Microsoft’s most convenient and popular features:
Like any other office no-brainer:
>the ability to access nearly every program used at work with a single logon.
It’s a market. There’s no demand for security. How often did the average Joe have one of their online accounts hacked or their credit card details stolen in 2024?
Obviously the most effective way of incentivizing companies to focus on security is NSA assembling a team to hack important companies, create real harm and accompanying press releases. Ah sorry, I meant Russian hacker news.
I may be off, but to me as an affected outsider (user), the continuing insistence on passwords after decades (yes, several decades) of problems and proven vulnerability, then 'mitigating' by putting a second line of 'defense' on the very fragile and non-transparent smartphone infrastructure instead of doing real reforms, is a sign of not giving a faint fack.
> an overwhelming focus on compliance rather than actual good security practices
Yes, this is sad and mostly a waste of time.
However (and perhaps it is what you meant) this is a direct reaction to the lack of that cultural shift towards caring about security.
So security teams are mostly left with two choices. One, argue for building secure products because security matters (and be laughed out of the room). Or two, argue for compliance with what the auditors require, which at least moves the needle a tiny bit toward security (sometimes).
I read a quote once that a CISO's job was to do enough public talks that when their company inevitably got popped because nobody values security, they've got their next job lined up already.
Did you buy the more expensive lock for your house? Are your doors fortified? If they are, why isn't the steel an inch thicker?
Do you also choose having money over security? Sounds like the government also chose having a more productive work force, etc, over higher costs and lower productivity.
In our analysis we determined that if the steel doors were thicker it would hinder our team of ex-special-forces security guards from operating their bazookas effectively in the event a suspicious person is spotted. Unfortunately it’s all too common that potentially dangerous fugitives on the run are trying to blend in as “mail carriers” and “neighbors on a walk”. Anyway, the auditors relented on the steel door issue, but then hammered us on why we didn’t have any tanks moving in formation in the front yard as a deterrent. In fairness, the FedEx guy made it all the way to our front door in two separate incidents last week. So the auditors have a point.
Not sure what point you are trying to make - for one, I don’t keep anything valuable in my house. Two, I have adequate security measures for the threats I am likely to deal with - I have cameras, locks on all windows and doors, and I have alarms.
The rough security/compliance world equivalent is a checklist that says “Do you lock your doors every night?” and you say “yea I do” regardless of whether or not you even have a lock or what kind it is, and they say “ok cool.”
It’s a false dichotomy that you need to choose between security and productivity.
It's common knowledge that clipboard audits of the perfunctory type skew towards security theater, and they're the most likely type to be performed because they're cheap and mostly automatable.
OTOH, a few SCAP baselines I've seen contain good shit.
Standardization and change control with deep, vigilant internal and external review help, because infosec is a cross-cutting concern requiring holistic, defense-in-depth controls, checks, and application. Also, avoid a Tragedy of the Commons scenario originating from an attitude of "it's everyone's responsibility" by having a dedicated security team with the resources, authority, and accountability to push back against unsafe practices and to monitor and remediate problems.
There is nothing wrong with processes per se. From civil engineering to automotive to aviation, there is a tangible outcome to all the laborious audits and paperwork.
These systems are a lot safer after regulations were put in place, however onerous and ineffective they seem.
I always wonder if software is different than physical construction, or if software is just less mature of a discipline.
In software, we can’t estimate projects accurately and consistently. We have to build a few to throw away just to get a better (yet still incomplete) picture of the problem we’re trying to solve.
Imagine if the people building your house had to build half of it and then start over. Maybe twice.
That never happens in physical construction. Maybe something has to be redone because someone made a mistake, but almost never due to not understanding the problem. So what’s different about software?
In the profit-center view, everything is either a cost center or a profit center. And it is nearly impossible to get anyone to truly care about a "cost center".
In my experience, the conflict in many bigger orgs isn't even on the cost vs profit axis, it's on the tangible vs non-tangible axis. It's a lot easier for middle managers to show they did well if they deliver customer-impacting features than a nebulous "improved security". This is true even when higher-up management actually wants to invest in security.
> The misaligned incentives between security and profit
Cynically, there is no incentive for security; there is ONLY profit. Security comes into play only where it can increase profit, it's a second order effect. (Of course, there are legal and regulatory pressures here for it as well.)
> an overwhelming focus on compliance rather than actual good security practices
I'm an application security engineer. I find that it depends widely on the company. You're right that compliance is purely a checklist and doesn't actually do much for security. At best, it slows down a determined internal attacker; e.g., a developer can't install a back door, since code reviews are enforced by the SCM before merging is allowed. But all the ISO-27001 and SOC-2 audits in the world won't prevent trivial attacks like SQL injection.
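To illustrate the gap: every audit control can pass while the first query below ships to production; the fix is a bound parameter, which no compliance checklist actually verifies. (An illustrative snippet, not code from any audited system.)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    attacker_input = "' OR '1'='1"

    # Vulnerable: string interpolation lets the input rewrite the query.
    rows = conn.execute(
        f"SELECT * FROM users WHERE name = '{attacker_input}'").fetchall()
    print("injected:", rows)       # returns every row in the table

    # Fixed: a bound parameter is treated as data, never as SQL.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()
    print("parameterized:", rows)  # returns nothing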
So the actual security depends on how much buy-in the AppSec team can get from project management. I've had companies where I point out an obviously exploitable flaw that can easily cause DoS, and with some determination could get RCE, and I get radio silence. Others, I point out a flaw where I say "It's incredibly unlikely to be exploitable, and attempts to exploit would require millions of requests that would raise alarms, but if someone is determined enough..." and project management immediately assigned the ticket and it was fixed within a week.
I can tell you one thing that's not doing any favors: overly zealous penetration testers who feel like they need to report SOMETHING, so they invent something that's not an issue. For example, in one app I worked on, after logging in, the browser would make an API call to get information about the current user, including its role. The pentester used Burp Suite to alter the response to the call to change the role to "admin", and sure enough, the web page would show the user role as "admin", so the pentester reported this as a privilege escalation. They clearly didn't go on to the next step of trying to do something as admin, though, because if they did, they'd see the backend still enforces proper RBAC. Changing that role to "admin" essentially just made all the disabled buttons/functionality in the web app light up, but trying to do anything would throw 403 Forbidden.
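The reason that finding was a non-issue is that the client-side role string is purely cosmetic; the backend re-checks authorization on every call, roughly like this (a sketch with invented names, not the actual app's code):

    # Authoritative, server-side role store, independent of whatever role
    # string the client's tampered UI happens to display.
    USER_ROLES = {"mallory": "user"}

    def require_role(user: str, needed: str) -> None:
        # Looked up server-side on every request; a Burp-edited API
        # response in the browser never reaches this table.
        if USER_ROLES.get(user) != needed:
            raise PermissionError("403 Forbidden")

    def delete_account(caller: str, target: str) -> str:
        require_role(caller, "admin")
        return "deleted " + target

    try:
        # Mallory's UI now says "admin", but the backend disagrees.
        print(delete_account("mallory", "alice"))
    except PermissionError as e:
        print(e)  # 403 Forbidden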
But I digress...
> The misaligned incentives between security and profit, especially in public companies, are not really a fixable problem without a massive cultural shift.
The EU seems to have figured it out, but the USA is a hypercapitalist hell-hole. It's such a shame that the population is mostly convinced that any regulation is bad and an attack on freedom. I roll my eyes at the Libertarians that claim that the Free Market(tm) will punish bad actors while the worst actors are rising to the top. Bad acting is profitable.
We run a SaaS and we get a ton of these. Most of them are absolutely inane, and it's a complete waste of our time having to look through their poorly written emails begging for 50/100 USD payouts.
We pejoratively refer to them as "Burp Babies", the equivalent of "script kiddies".
I think that when companies sell to the government, there is so much money to be made, and such a huge PR boost, that they are incentivized to cover up the naughty bits (a certain airframe manufacturer comes to mind).
It can mean anything from concealing slightly embarrassing stuff to massive, systemic, deliberate fraud; sometimes the whole spectrum, over time.
It often seems to encourage a basic corrosion of Integrity and Ethics, at a fundamental cultural level.
When leaders say "Make Security|Quality a priority," but don't actually incentivize it, they set the stage.
For example, routinely (as in what is done every day) rewarding or punishing, based on monetary targets, vs. punishing one or two low-level people, every now and then (when caught), says it all. They are serious about money, and not serious at all, about Security|Quality.
If you want to meet a goal, you need to incentivize it. Carrots work better than sticks. Sales people get a lot of stress, and can get fired easily, but they can also make a great deal of money, if they succeed. Security people don't get fired, if they succeed, and get fired, if they don't. Often, the result of good work is ... nothing ... No breaches, no disasters, no drama. Hard to measure, as well. How to quantify an absence?
Sales: Lots of carrot, and the same stick as everyone else gets. Easy to measure, too.
Security: No carrot. All stick. The stick can be a really big stick, too; with nails driven through it.
I'm really not sure what the answer is, but it's cultural, and culture is always the most difficult thing to change.
I think this is sort of it but I don't think it's the carrot that's the problem here. I believe it's the process and yeah ultimately the culture.
I don't think you want sales concerned about security; their focus should be on growth, and only growth. The problem is that if you don't give jurisdiction and power to the other side to actually say "no, this priority (security fix) goes in before work is done on this new feature", then you have an imbalanced system.
If the project manager who is incentivized toward growth is the decision-maker for deciding what is prioritized, well of course naturally you'll have the PM choosing growth over security.
Process needs fixing, give more agency and jurisdiction to the other side to effect change. It's not like security doesn't see what the issues are, it's just the fixes are not prioritized and the culture and process isn't balanced between both.
> “If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security,” the company’s CEO, Satya Nadella, told employees.
Satya's model of making security a priority at Microsoft:
- Cram ads in every nook and corner of Windows. Left, right, centre, back, front, everywhere. What else is an operating system for?
- Install a recorder which records everything you do. For the benefit of users of course - you know, what if a user missed an ad and wants to go back and see what they missed.
- Send a mail to your employees and tell them "Do security". Mission accomplished - Microsoft is now the most secure platform.
The Microsoft bribes scandal broke not too long after I had to take the "hey don't do bribes" training at Microsoft.
That event really drove home for me the fact that all of the trainings, emails, processes, etc. are mostly plausible deniability. There are people who care about security at MS. I know, I've met them, but for the most part all of this exists so that Satya can plausibly say in court or in front of congress, "well we told them to do security better. This is clearly the fault of product teams or individual contributors, not Microsoft policy and incentives."
I dunno, that’s a pretty cynical take. Isn’t it just as plausible that they became aware of the bribes internally and were trying to curtail them when the scandal broke out? Or maybe the “don’t do bribes” training actually worked enough for someone to whistleblow even if official internal channels failed? Those who are doing wrong often try to stymie others from making positive changes out of fear, greed, etc.
Edit: I just want to add that there are things to be cynical about - I’m not completely naive. If it’s your legal department heading up the training then you can be pretty sure that there was a cause for it.
Microsoft has for over two decades been one of the largest and most sophisticated employers of security talent in the industry, and for a run of about 8 years probably singlehandedly created the market for vulnerability research by contracting out to vulnerability research vendors.
Leadership at Microsoft is different today than when the process of Microsoft's security maturation took place, but I'll note that through that whole time nerd message boards relentlessly accused them of being performative and naive about security.
To be fair, it's not really possible to come up with good policy to handle this at scale. It would be too intrusive to require employees to divulge their private financial accounts (and near impossible to audit that the employee has truly divulged all their financial accounts), and the more internal controls you put in place, the slower the deal-making gets, with no guarantee of good behavior.
Eh. For the most part, the trainings can be taken at face value. Even if the management's dealings with governments and partners are questionable, no company wants random employees accepting personal kickbacks from vendors.
There's a liability avoidance component to trainings, but mostly for non-business misconduct. For example, for sexual harassment, the company will say they tried everything they could to explain to employees that this is not OK, and the perpetrator alone should be financially liable for what happened. That defense is a lot less useful in business dealings where the company benefits, though.
To be fair to Satya, every leader should be judged on what they do, not what they say. This isn't a Microsoft or Satya problem; pick a large corporation and you'll find examples of this behavior everywhere.
Words in an email hold absolutely no weight, when leaders choose to trade security for something else that's all employees need to know.
In particular when you need to answer to shareholders and can be voted out of your position/company. I don't pretend that Microsoft's past hasn't been an issue, but if we compare the past to the present, Satya has had a somewhat positive impact (although I know there's a lot behind the scenes that I, like most, will never know about). It's good to be critical of every company; otherwise the end users get rolled over.
I have no broad evidence of this, but I suspect that the more beginner-friendly Linuxes are guilty of a lot of the sins that you laid out here. I seem to remember some controversy with Canonical recording your searches when hitting the super key, and Ubuntu having Amazon ads built in by default.
People who love to geek out about computers can of course install Arch or Gentoo or NixOS Minimal and then audit the packages that they're installing to see that there's no obvious security violations, but it's unrealistic to think that most non-software-engineer people are going to do that.
I really don't know how to fix this problem; there will always be an incentive for Microsoft (and every other company) to plaster as many ads as they think they can get away with, as well as collecting as much data as possible. I don't know that I would support regulation on this, but I don't know what else could be done.
> I seem to remember some controversy with Canonical recording your searches when hitting the super key, and Ubuntu having Amazon ads built in by default.
It also went the other way around with Microsoft. If you deployed an Ubuntu VM in Azure, they contacted you on LinkedIn to offer commercial support. Not joking: https://www.theregister.com/2021/02/11/microsoft_azure_ubunt...
Debian is a perfectly reasonable choice for casual Linux users. Ubuntu's supposed usability improvements over Debian are greatly exaggerated. It's mostly just marketing.
> can of course install Arch or Gentoo or NixOS Minimal and then audit the packages that they're installing to see that there's no obvious security violations, but it's unrealistic to think that most non-software-engineer people are going to do that.
It's a fantasy to think that random devs can audit kernel/security code. No single person can. Too many lines of code to audit (that you didn't write yourself). Even if you hired a team, by the time the team does the audit, the goalposts have moved with new source code.
It's not surprising that when a Linux distribution was taken over by a capitalistic firm, it decided to forgo good values and instead prioritized profits over everything else.
> I really don't know how to fix this problem
Stop using software made by companies that do bad things. Improve the software that doesn't.
> you know, what if a user missed an ad and wants to go back and see what they missed.
Unrelated, and maybe this actually exists, but with the rise of LED billboards there has been more than one occasion where a billboard was displaying something and it cycled too fast, or the print was too small.
I would actually be interested in visiting the billboard's website, clicking on the geographical billboard location, and seeing what it's been showing.
> you know, what if a user missed an ad and wants to go back and see what they missed
I have meetings with adtech guys and this gets pitched every time. Along with "a way to save ads so you can watch them again at home later!" And "alexa enable ads that you can talk to!"
- I do not see ads in “every nook and corner of Windows” and neither do you.
- I do not have a recorder installed on my Windows machines and neither do you.
- no one qualified to make that statement has said that Microsoft is the most secure platform.
It is so hard to listen to anyone who exaggerates at this level. If anything, it drives interest in Microsoft because these are all obviously false statements and some readers will wonder what your true motive is. You just raise suspicion in yourself.
At least you used a new account to distance yourself from any other identities you may have here. In fact I would say that was the only smart move in your entire comment.
Anyway, this is a damning revelation by the whistleblower and I hope Microsoft feels a good amount of pain because of it. NEVER make any decision with money as your sole input. It will always be a bad decision, and it’s just a matter of time until that decision bites you or someone you care about.
You're getting downvoted for your tone most likely, but I agree with this statement:
> - I do not see ads in “every nook and corner of Windows” and neither do you.
As a professional "Windows user" logging 8+ hours a day on my PC, I see no ads. Unless you count "OneDrive" ads, in which case I see iCloud ads on my iPhone too. I'm fine with classifying these as ads, but I'm certainly not seeing them "in every nook".
Are these ads only bundled with certain versions of Windows?
As per usual, executive platitudes around "security first" don't matter.
If you pay and promote people for features, and don't reward security culture, people are not dumb: they and the management layers will optimize for that.
I don't know how to design incentives to solve for this, but this is always going to be the way it is.
This is a non-solution, and automatic "head rolling" and punishments will only reduce the accumulation of actual meaningful experience: the mean time between major breaches like this is long enough and variable enough that the next person would likely be equally incompetent, inexperienced, and inattentive.
There's no easy solution, because it's an inherently very difficult problem. Making the correct trade-off between security and everything else for society, and determining exactly where the line needs to be drawn, are extremely difficult problems, and no amount of laws and punishments will help with finding the right balance.
I do like what CISA seems to be trying to do, and I think they can do a lot more here. I think we need the CSRB or some similar org to get to the place where the NTSB is; the key value of the NTSB for humanity is ensuring that critical knowledge from safety incidents gets accumulated and shared. Right now, learnings from key infosec incidents are not broadly shared in any reasonable timeframe, if ever, and so we repeat the same mistakes over and over again.
Usually, a feature is included in a product if the marketing analysis shows that it will grow the business by more than the cost of the feature. Maybe we can try the same idea?
"We identified this vulnerability; it will impact X% of our customers and Y% will leave (plus reputation damage), so we will lose BIGNUMBER $. However, we can correct it for SMALLNUMBER $ in Z days. Decision?"
> In the months and years following the SolarWinds attack, Microsoft took a number of actions to mitigate the SAML risk. One of them was a way to efficiently detect fallout from such a hack. The advancement, however, was available only as part of a paid add-on product known as Sentinel.
So you sell me a submarine with screen doors, avoid fixing it for years, cripple internal processes that would fix it, and then you want to charge me for a water alarm? That's chutzpah.
Also identification is one thing, but good security should mean the vulnerability didn't occur in the first place.
Then you also need to get budget for identifying vulnerabilities.
After that you need budget to research how costly the vulnerability could be.
But before getting those budgets you need budget again to propose all of that and data to prove its value.
Unless you use your own time to do all of that or accidentally stumble upon something.
I think the only realistic way to get any sort of budget is if a deep enough incident actually happens. And this will only last maybe a year, until most of the decision-makers have been rotated out for new ones who only want to deliver again.
There's a pretty big caveat in this story which I feel is being overlooked:
"Disabling seamless SSO would have widespread and unique consequences for government employees, who relied on physical “smart cards” to log onto their devices. Required by federal rules, the cards generated random passwords each time employees signed on. Due to the configuration of the underlying technology, though, removing seamless SSO would mean users could not access the cloud through their smart cards. To access services or data on the cloud, they would have to sign in a second time and would not be able to use the mandated smart cards."
The U.S. Government (USG) is one of MSFT's largest (if not the largest) customers. The user base is enormous, and the AD footprint equally so. I have experience working in this space; the user and roles management is a nightmare, with compromised credentials, locked-out accounts, and the like. Given the nature of their work, it's a constant target.
The USG has been attempting to move everyone to smart-card auth to help mitigate some of these issues. Removing passwords and moving everyone to two-factor auth would greatly reduce their attack surface. They've been pursuing this for years.
So along comes this guy, and he says that, as part of this fix, they should just tell all of their customers to turn this off.
I don't dispute the danger of the original SAML flaw. But I think Harris is unfairly judging the rest of MSFT's reaction here. He's asking them to turn off two-factor auth across entire agencies. I might as well hand an attacker a set of credentials because that's the amount of effort and time they would need to phish a set off someone.
To reiterate, the flaw in AD FS was bad and needed immediate attention. But the short-term mitigation Harris proposed would drastically hurt their security and open tons of customers to attacks of the very sort they were trying to prevent. This story is spun as another instance of a company not caring about security, but I see a "whistleblower" who had a very narrow view of their customers' overall security posture, and threw a fit when this was pointed out to him.
"To access services or data on the cloud, they would have to sign in a second time and would not be able to use the mandated smart cards.
Harris said Morowczynski rejected his idea, saying it wasn’t a viable option."
I would fully expect most government agency Info Sec Systems Managers (ISSMs) to say the same.
Obviously, nobody is going to outright admit they put profits above security; indeed, they will often state the opposite. But their closely-held beliefs will shine through when it comes time to make decisions and the outcomes of those decisions are exposed to their customers and to the public.
I genuinely think Proton as a company would prefer to cease to exist rather than offer insecure products. In fact, there are a lot of offerings I would use (and pay more for) that they could make but choose not to (like a calendar that is not behind an airtight protocol and could integrate with my regular calendar clients).
They are rare, but Mullvad comes to mind immediately. They have made several decisions that directly impacted their bottom line (no recurring subscriptions where they need to keep the customer's credit card on file) to the benefit of their customer's security.
I'm sure there are some companies that realise security (or rather the critical lack of some important aspect of it) can impact profits, but that depends a lot on who their customers are too. Ultimately, if the customers who pay for a vendor's products and services don't value it, then the vendors won't value it either, short of any regulatory or legal requirements that might compel them otherwise. However, given that many large organizations (including governments) are Microsoft customers, it's strange to see in this case. Maybe there's a kind of "it can't happen to us" or "nobody will find out about it" arrogance going on, but they must now be seeing that the reputational damage is likely to have negative impacts, including hurting future profits, down the road.
Isn’t there a point when a company becomes so big and so impactful to multiple layers of our life, that it should be impossible for them to continue focusing on profit alone?
I’m not talking about regulation per se, but holding humans in charge of such corps more accountable.
I don't think it's going to happen unless we decide to nationalize private services that are vital to people.
Why don't we have a public maps system, or a content sharing platform? Services like Google Maps/Search or YouTube are by now part of the infrastructure of our society.
Just as roads/railways and energy production are publicly owned in many countries, the same should happen for digital services. In large parts of Europe, railways are publicly built and maintained while the trains are privately owned.
Any company of sufficient size will fail to incentivise the things they claim at the top. Unfortunately, the impacts of decisions (especially during austerity) are poorly understood, so even the best-intentioned will fail once they reach a certain size.
This isn't about Microsoft, per se. This is about the fact that there's no risk for companies that behave this way, even if they're bidding for government work. Hopefully whistleblowers making these things public will lead to the public putting pressure on their elected officials to actually make some regulations with teeth in this area. I'm not holding my breath, but it is something I consider in the voting booth.
Imagine a major bridge that was built by a contractor. An internal safety inspector repeatedly warned his supervisors of structural deficiencies that could lead to the collapse of the bridge. Furthermore, over time two external sources publicly warned about the issue, but the company downplayed its importance.
Finally, the bridge collapses. It becomes evident that the company did nothing about the issue because it didn't want to lose contracts selling more flawed bridges.
The public would justifiably go nuts, and there would be legal consequences for everyone involved.
What is different in our industry that companies (and managers) get away with such malice?
Here in Norway a bridge built with known structural deficiencies did in fact collapse[1], and basically nothing has happened, except taxpayers get to pay even more for a new bridge.
Unless enough lives are lost, people generally don't care that much it seems.
I'm not sure if this lines up with the Dunbar number or something similar, but it sure seems reasonable that societies and centralized power should never grow beyond the scale where people stop caring.
If the public is expected to keep government and corporations in check but the public doesn't care, it can only end poorly.
>What is different in our industry that companies (and managers) get away with such malice?
Software isn't immediately life-threatening. That's why it's all the Wild West outside of medical and aerospace. While it sucks to have PII leaked to the internet, you do have time to at least take action, compared to a door coming off an airplane.
>What is different in our industry that companies (and managers) get away with such malice?
Lack of professional licensure that binds you to state regulation with jail time as one of the stated punishments besides financial liability.
Heh, the government could start effecting change by mandating licensure and sign-offs by licensed individuals when contracting for software products sold to the government.
Wasn't there something a bit like that with the Morandi bridge that collapsed in Italy?
(There was definitely something like that with the Mottarone cable car, which had been running for years with the safety catch disabled. When the tow rope snapped, with no catch, the cabin rushed down and killed everyone on board.)
Management that knowingly chooses to ignore a major issue should be charged with criminal negligence. The creation of the bug is a common and hard-to-avoid mistake. But once it has been found, choosing not to fix it despite being warned of the consequences makes you responsible for those consequences.
Lost me there after two words! There is never a THE solution ... ever. As any engineer will tell you: "best efforts and here is why ..."
Zero trust is a philosophy and quite a good one in my opinion but it isn't a solution.
I suggest you stop thinking in terms of (absolute) solutions and perhaps think in terms of philosophies and good practices.
Where I work, IT is outsourced and the decision to buy most of the software is made by managers who have no idea about computers.
Duh.
The problem is that more reactive environments take a Russian-roulette gamble on potentially unrecoverable catastrophes before taking action.
(Proactivity is more expensive than clicking a seatbelt.)
Many big, famous firms (especially Microsoft) would not exist if that were actually done.
That was true until people like the Cult of the Dead Cow started to both sell the solutions and hand out the tools to exploit everyone not implementing them.
Today, things like the DMCA actually protect the maliciously incompetent, and businesses that don't take advantage of that are fools.
I'm an application security engineer. I find that it depends widely on the company. You're right that compliance is purely just a checklist and does and doesn't actually do much for security. At best, it slows down a determined internal attacker. ie, a developer can't install a back door since code reviews are enforced by SCM before merging is allowed. But all the ISO-27001 and SOC-2 audits in the world won't prevent trivial attacks like SQL injection.
So the actual security depends on how much buy-in the AppSec team can get from project management. I've had companies where I point out an obviously exploitable flaw that can easily cause DoS, and with some determination could get RCE, and I get radio silence. Others, I point out a flaw where I say "It's incredibly unlikely to be exploitable, and attempts to exploit would require millions of requests that would raise alarms, but if someone is determined enough..." and project management immediately assigned the ticket and it was fixed within a week.
I can tell you one thing that's not doing any favors is overly zealous penetration testers that feel like they need to report SOMETHING so they invent something that's not an issue. For example, in one app I worked on, after logging in, the browser would make an API call to get information about the current user, including it's role. The pentester used Burp Suite to alter the response to the call to change the role to "admin", and sure enough, the web page would show the user role as "admin", and so the pentester reported this as a privilege escalation. They clearly didn't go on to the next step of trying to do something as admin, though, because if they did, they'd see the backend still enforces proper RBAC. Changing that role to "admin" essentially just made all the disabled buttons/functionality in the web app light up, but trying to do anything would throw 403 Forbidden.
But I digress...
> The misaligned incentives between security and profit, especially in public companies, is not really a fixable problem without a massive cultural shift.
The EU seems to have figured it out, but the USA is a hypercapitalist hell-hole. It's such a shame that the population is mostly convinced that any regulation is bad and an attack on freedom. I roll my eyes at the Libertarians that claim that the Free Market(tm) will punish bad actors while the worst actors are rising to the top. Bad acting is profitable.
We run an SaaS and we get a ton of these. Most of these are absolutely inane and complete waste of our time having to look through their poorly written email begging for 50/100 USD payouts.
We pejoratively refer to them as "Burp Babies", the equivalent of "script kiddies".
It can mean anything from concealing slightly embarrassing stuff, to massive, systemic, deliberate, fraud; sometimes, the whole spectrum, over time.
It often seems to encourage a basic corrosion of Integrity and Ethics, at a fundamental cultural level.
When leaders say "Make Security|Quality a priority," but don't actually incentivize it, they set the stage.
For example, routinely (as in what is done every day) rewarding or punishing, based on monetary targets, vs. punishing one or two low-level people, every now and then (when caught), says it all. They are serious about money, and not serious at all, about Security|Quality.
If you want to meet a goal, you need to incentivize it. Carrots work better than sticks. Sales people get a lot of stress, and can get fired easily, but they can also make a great deal of money, if they succeed. Security people don't get fired, if they succeed, and get fired, if they don't. Often, the result of good work is ... nothing ... No breaches, no disasters, no drama. Hard to measure, as well. How to quantify an absence?
Sales: Lots of carrot, and the same stick as everyone else gets. Easy to measure, too.
Security: No carrot. All stick. The stick can be a really big stick, too; with nails driven through it.
I'm really not sure what the answer is, but it's cultural, and cultural change is always the most difficult thing to change.
I don't think you want sales concerned about security, their focus should and only be on growth. The problem is if you don't give jurisdiction and power to the other side to actually say no this priority (security fix) goes in before work is done on this new feature, then you have an imbalanced system.
If the project manager who is incentivized toward growth is the decision-maker for what gets prioritized, then of course you'll have the PM choosing growth over security.
The process needs fixing: give more agency and jurisdiction to the other side to effect change. It's not that security doesn't see what the issues are; it's that the fixes are not prioritized, and the culture and process aren't balanced between the two sides.
There are pretty significant incentives on the government's side (or at least for the individual decisionmaker's career) to see the deal go through as well.
Both sides want the deal to go through, both sides have motive to hide flaws unless end users will find out before they retire.
Satya's model of making security a priority at Microsoft:
- Cram ads in every nook and corner of Windows. Left, right, centre, back, front, everywhere. What else is an operating system for?
- Install a recorder which records everything you do. For the benefit of users of course - you know, what if a user missed an ad and wants to go back and see what they missed.
- Send a mail to your employees and tell them "Do security". Mission accomplished - Microsoft is now the most secure platform.
That event really drove home for me the fact that all of the trainings, emails, processes, etc. are mostly plausible deniability. There are people who care about security at MS. I know, I've met them, but for the most part all of this exists so that Satya can plausibly say in court or in front of Congress, "well, we told them to do security better. This is clearly the fault of product teams or individual contributors, not Microsoft policy and incentives."
Edit: I just want to add that there are things to be cynical about - I’m not completely naive. If it’s your legal department heading up the training then you can be pretty sure that there was a cause for it.
Leadership at Microsoft is different today than when the process of Microsoft's security maturation took place, but I'll note that through that whole time, nerd message boards relentlessly accused them of being performative and naive about security.
There's a liability avoidance component to trainings, but mostly for non-business misconduct. For example, for sexual harassment, the company will say they tried everything they could to explain to employees that this is not OK, and the perpetrator alone should be financially liable for what happened. That defense is a lot less useful in business dealings where the company benefits, though.
Words in an email hold absolutely no weight; when leaders choose to trade security for something else, that's all employees need to know.
People who love to geek out about computers can of course install Arch or Gentoo or NixOS Minimal and then audit the packages they're installing to see that there are no obvious security violations, but it's unrealistic to think that most non-software-engineer people are going to do that.
I really don't know how to fix this problem; there will always be an incentive for Microsoft (and every other company) to plaster as many ads as they think they can get away with, as well as to collect as much data as possible. I don't know that I would support regulation on this, but I don't know what else could be done.
It was also the other way around with Microsoft. If you deployed an Ubuntu VM in Azure, they contacted you on LinkedIn to offer commercial support.
Not joking: https://www.theregister.com/2021/02/11/microsoft_azure_ubunt...
It's a fantasy to think that random devs can audit kernel/security code. No single person can. Too many lines of code to audit (that you didn't write yourself). Even if you hired a team, by the time the team does the audit, the goalposts have moved with new source code.
> I really don't know how to fix this problem
Stop using software made by companies that do bad things. Improve the software that doesn't.
Unrelated, and maybe this actually exists, but with the rise of LED billboards, there has been more than one occasion where a billboard was displaying something that cycled too fast, or where the print was too small.
I would actually be interested in visiting a billboard website that lets me click on a billboard's geographical location and shows me what it's been displaying.
I have meetings with adtech guys, and this gets pitched every time. Along with "a way to save ads so you can watch them again at home later!" and "Alexa-enabled ads that you can talk to!"
- I do not see ads in “every nook and corner of Windows” and neither do you.
- I do not have a recorder installed on my Windows machines and neither do you.
- no one qualified to make that statement has said that Microsoft is the most secure platform.
It is so hard to listen to anyone who exaggerates at this level. If anything, it drives interest in Microsoft, because these are all obviously false statements and some readers will wonder what your true motive is. You just raise suspicion about yourself.
At least you used a new account to distance yourself from any other identities you may have here. In fact I would say that was the only smart move in your entire comment.
Anyway, this is a damning revelation by the whistleblower and I hope Microsoft feels a good amount of pain because of it. NEVER make any decision with money as your sole input. It will always be a bad decision, and it’s just a matter of time until that decision bites you or someone you care about.
> - I do not see ads in “every nook and corner of Windows” and neither do you.
As a professional "Windows user" logging 8+ hours a day on my PC, I see no ads. Unless you count "OneDrive" ads, in which case I see iCloud ads on my iPhone too. I'm fine with classifying these as ads, but I'm certainly not seeing them "in every nook".
Are these ads only bundled with certain versions of Windows?
Disclaimer: I do not work for Microsoft or Apple.
If you pay and promote people for features, and don't reward security culture, people are not dumb: they and the management layers will optimize for that.
I don't know how to design incentives to solve for this, but this is always going to be the way it is.
It's law, regulation and liability.
Until heads roll, until someone is punished, likely nothing will happen.
There's no easy solution, because it's an inherently very difficult problem: making the correct trade-off between security and everything else for society, and determining exactly where the line needs to be drawn, are extremely hard, and no amount of laws and punishments will help with finding the right balance.
I do like what CISA seems to be trying to do, and I think they can do a lot more here. I think we need the CSRB or some similar org to get to the place where the NTSB is: the key value of the NTSB for humanity is ensuring that critical knowledge from safety incidents gets accumulated and shared. Right now, learnings from key infosec incidents are not broadly shared in any reasonable timeframe, if ever, and so we repeat the same mistakes over and over again.
Usually, a feature is included in a product if marketing shows that it will grow the business by more than the cost of the feature. Maybe we can try the same idea?
"We identified this vulnerability, and it will impact X % of our customer and Y % will leave (+ reputation damage) so we will loose BIGNUMBER $. However, we can correct it for SMALLNUMBER $ in Z days. Decision ?"
Advertising something as "secure" SHOULD be seen as silly as advertising it as "doesn't crash". But we're not ready for that, I guess.
> In the months and years following the SolarWinds attack, Microsoft took a number of actions to mitigate the SAML risk. One of them was a way to efficiently detect fallout from such a hack. The advancement, however, was available only as part of a paid add-on product known as Sentinel.
So you sell me a submarine with screen doors, avoid fixing it for years, cripple internal processes that would fix it, and then you want to charge me for a water alarm? That's chutzpah.
Also, identification is one thing, but good security should mean the vulnerability doesn't occur in the first place.
Then you also need to get budget for identifying vulnerabilities.
After that you need budget to research how costly the vulnerability could be.
But before getting those budgets, you need budget again to propose all of that, plus data to prove its value.
Unless you use your own time to do all of that or accidentally stumble upon something.
I think the only realistic way to get any sort of budget is if a deep enough incident actually happens. And this will only last maybe a year, until most of the decisionmakers have been rotated out and replaced with new ones who want to focus only on delivery again.
Your complete system design and other features should be based on the idea of “security first”, if you really want to build secure systems.
...years and years later
"Disabling seamless SSO would have widespread and unique consequences for government employees, who relied on physical “smart cards” to log onto their devices. Required by federal rules, the cards generated random passwords each time employees signed on. Due to the configuration of the underlying technology, though, removing seamless SSO would mean users could not access the cloud through their smart cards. To access services or data on the cloud, they would have to sign in a second time and would not be able to use the mandated smart cards."
The U.S. Government (USG) is one of MSFT's largest (if not the largest) customers. The user base is enormous, and the AD footprint equally so. I have experience working in this space; the user and roles management is a nightmare, with compromised credentials, locked-out accounts, and the like. Given the nature of their work, it's a constant target.
The USG has been attempting to move everyone to smart card auth to help mitigate some of these issues. Removing passwords and moving everyone to two-factor auth would greatly reduce their attack surface. They've been pursuing this for years.
So along comes this guy, and as part of his fix, he says to just tell all of their customers to turn this off.
I don't dispute the danger of the original SAML flaw. But I think Harris is unfairly judging the rest of MSFT's reaction here. He's asking them to turn off two-factor auth across entire agencies. You might as well hand an attacker a set of credentials, given how little effort and time it would take to phish a set off someone.
To reiterate, the flaw in AD FS was bad and needed immediate attention. But the short-term mitigation Harris proposes would drastically hurt their security and open tons of customers to attacks of the very sort they were trying to prevent. This story is spun as another instance of a company not caring about security, but I see a "whistleblower" who had a very narrow view of their customers' overall security posture and threw a fit when this was pointed out to him.
"To access services or data on the cloud, they would have to sign in a second time and would not be able to use the mandated smart cards.
Harris said Morowczynski rejected his idea, saying it wasn’t a viable option."
I would fully expect most government agency Info Sec Systems Managers (ISSMs) to say the same.
Which... is exactly the article's point. They knew there was no secure way to administer it, and yet sold it anyway.
> Bill Gates in 2002: "So now, when we face a choice between adding features and resolving security issues, we need to choose security."
https://www.wired.com/2002/01/bill-gates-trustworthy-computi...
> Satya Nadella in 2024: "If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security."
https://www.theverge.com/24148033/satya-nadella-microsoft-se...
I don't know of any company that has profit in their slogan, or in the core values statement, etc.
From day one everyone knew they were FSB puppets, and people are still giving them money.
I’m not talking about regulation per se, but holding humans in charge of such corps more accountable.
Why don't we have a public maps system, or a content sharing platform? Services like Google Maps/Search or YouTube are by now part of the infrastructure of our society.
The same way that roads/railways or energy production are publicly owned in many countries, the same should happen for digital services. In much of Europe, railways are publicly built and maintained while the trains are privately owned.
Google Trust Services
Disclaimer: I've worked in both of these :)
* go bankrupt because we can't be secure
* be less secure and stay in business
...guess which one will almost always win.
Microsoft, of course, as a multi-trillion-dollar company, has no such threat, and there's no reasonable excuse for this.
What is different in our industry that companies (and managers) get away with such malice?
Unless enough lives are lost, people generally don't care that much, it seems. [1]
[1]: https://www.nrk.no/innlandet/statens-vegvesen-legg-fram-rapp...
If the public is expected to keep government and corporations in check but the public doesn't care, it can only end poorly.
Maybe they proudly state that they knew the risks, and, while unfortunate, the risks became reality. And then everything is fine.
>What is different in our industry that companies (and managers) get away with such malice?
Software isn't immediately life threatening. That's why it's all the wild west outside of medical and aerospace. While it sucks to have PII leaked to the internet, you do have time to at least take action, compared to a door coming off an airplane.
Being a Boeing whistleblower is, though.
Lack of professional licensure that binds you to state regulation with jail time as one of the stated punishments besides financial liability.
Heh, the government could start effecting change by mandating licensure and sign-offs by licensed individuals when contracting for software products sold to the government.
(There was definitely something like that with the Mottarone cable car, which had been running for years with the safety catch disabled. When the tow-rope snapped, with no catch, the cabin rushed down and killed everyone on board.)