The author doesn’t really try to see things from the regulator’s perspective, and it weakens their argument.
To the state, one of the most important goals is legibility. The state has an enormous burden of regulation in its various enterprises, and the less variance, the easier the job. This certainly isn’t ideal in many, many cases, but here a person who likely didn’t write the standards is quite rationally trusting a vetted standard over one small, trivial test handed to them by the very company they’re evaluating. FIPS standards are probably much more exhaustive than a hashing performance test on one piece of hardware.
Yes, it's bad when bureaucracy means that individual bad decisions are made. But casually saying "oh, well, this is idiocy, the government doesn't care about security" ignores the fact that organizing the regulation of human behavior is immensely costly, and the government needs to optimize on those costs.
A good dev analogy is using a library function vs rolling your own. The government's library is the list of things that its own engineers approved. The equivalent of rolling their own is for some individual auditor to make an independent decision to deviate from the list. Now, it might be that you can squeeze out an extra tiny bit of performance[/the government can squeeze out some extra security] by rolling your own sort function or whatever, but most wise devs will balance that off against the risk of introducing extra bugs, the extra writing and maintenance time involved, etc.
Reality is that this sucks. It would be amazing if it wasn't actually super-super costly to organize human beings to all point in the same direction and make sure that people don't cheat. Government would be a lot cheaper and more efficient. But, alas, reality does suck.
I feel bad for the author for publishing this article.
If your (the author) mindset is that "everyone else is an idiot", be it management, or regulators, auditors or some other group that is not your group (engineer?), then unfortunately you have an attitude problem.
If, furthermore, you maliciously misinterpret the other group, saying "they want us to be compliant, not secure", based on misunderstanding the domain expertise _of your own group_ (!), then you are a liability to your business and should be kept at arm's length from external auditors.
Because the author clearly has no clue what he is doing, and based on his technical shortcomings he is speaking down on others who are _actually_ doing their job correctly: the auditors.
Why is the author comparing bcrypt at 32 iterations with sha512crypt at 5000 iterations? Why not increase the sha512crypt iteration count by orders of magnitude if he wants the hashcat test to fare equal or better under the testing methodology in use? Why is he using hashcat and basing decisions solely on that tool? Why is he using shacrypt at all and not a proper KDF, such as PBKDF2, plugging in SHA2?
Yeah - utterly agree. The point raised in the article is valid - but the correct thing to do isn't to persuade the auditor they are wrong, it's to push FIPS to include bcrypt instead of SHA2 (although as below, this isn't the right answer either - it should use PBKDF2 or similar).
I think your analogy misses the most important benefit: you must use the government's library. If people were allowed to choose other libraries, then they might choose wrong. And since the government protects companies from the consequences of making bad choices, the government can't let them make those choices.
The great failing of so many here is that they don't recognize that uniformity and centralization are more important than innovation. If those innovations really are better, then you can vote to elect representatives to nominate regulators to eventually change the requirements. This is how things work in a civilized society.
It is a frustrating end result in this case. I wonder if there's any possibility for escalating this to a regulator's technical resources? I wonder if this kind of thing is typically left to the auditor discretion.
The regulator believing "this system isn't legible to us and so adds burden to our work" is roughly equivalent to the author's conclusion: "They want us to be compliant, not secure".
The correct response to all of this would have been an accepted means for individual departments to submit requests for updates/changes to the standard to be voted on before they are forced to comply. The issue at hand is no mechanism for feedback. You can only comply or leave. This is the consistent failure of top-down governance.
> The correct response to all of this would have been an accepted means for individual departments to submit requests for updates/changes to the standard to be voted on before they are forced to comply
This would result in what's pejoratively called "design by committee", which inevitably comes with bike-shedding, tit-for-tat politics, and so on.
NIST is a public institution with a long history tracing back to the origins of the United States. It's a formal part of the state, and its raison d'être is to ensure that decision making around standardization actually happens. [1]
The point of all of this harkens back to a fundamental principle:
If you want to travel fast, go alone. If you want to travel far, go with many.
The organization of a state catering to many millions follows the latter part of that principle, for better or worse. Standardization doesn't guarantee the optimal solution to a problem; it guarantees a workable solution for the majority of a given problem space.
And that's exactly what public governance is about. Ensuring the continued existence of the state as a legitimate, legal entity that governs and serves public interests at large.
> The issue at hand is no mechanism for feedback.
Well, it's perfectly feasible to... just e-mail them or give them a call. The NIST ITL CSRC lab can easily be reached via their website. [2]
If you want to know the decision making process that went into a standard, you can submit a FOIA request. [3]
I get it, the gov has a hard problem to solve. It's a case of both of them being right - the gov needs a common level of gradable security applied, and these folks are in an uncommon position of being able to test and prove that their choice is more secure despite not being a graded choice that can be approved. The auditors are not subject matter experts, they work for bottom dollar to ensure fiscal efficiency and only check boxes. The best choice is shouted down in the name of efficiency.
This is also how smart folks will nearly always be chased out of government work. It's always safer and less work to make the dumb safe choice rather than the smarter choice that costs less or works better. This explains most of the big problems we have in government. We're not allowed to make smart choices ever, only mediocre choices at best, worst choices most of the time.
How long can we sustain this line of behavior, though? We can do it for quite a while yet, but eventually the overhead of always making mediocre decisions to guarantee mediocre results will kill the state in multiple ways.
Also, the core technical argument - that hashcat proves crypt is better than SHA2 - is weak. I could write a hashing function that looks good if you throw hashcat at it, but I wouldn't recommend you use it.
I believe bcrypt is a better choice, but if I didn't I wouldn't have been persuaded.
> The author doesn’t really try to identify from the regulator’s perspective
The regulator qua regulator is remote from the narrative, which deals only with the auditors. Who don't even work for the same agency.
The agency issuing the rules is concerned with compliance in a way that actual security neither mitigates nor alters. That is the conclusion the author reaches, and it is entirely factually correct.
The whole story is that important decisions are made extremely remotely (in all of time, organizational structure, domain, and concern) from the project.
Now, if this were an argument for a specific alternative, or even just generally against the practice without a replacement, it would need to go further. It would have to address whether the current approach deals with different, potentially even more serious problems (corruption and/or ineptitude in over-empowered public agency managers, for instance), whether it is a net positive or negative in that context, and whether, if it is a net positive, either the specific alternative proposed (if any) or any conceivable alternative would likely be a greater net positive.
But that's not what this piece seems to be trying to do.
> To the state, one of the most important goals is legibility.
You're making the argument from the book Seeing Like a State. But the thrust of that book is that those practices lead to bureaucratic catastrophes like so-called "scientific forestry" -- which is exactly the author's point here in the context of security.
I can't agree. This goes to show how bureaucracy often gets in the way of better solutions. While I'm sure he can understand why compliance makes the government's job easier, it didn't make HIS job easier, so he had a valid concern. He had to bend to the government's will, however, because they paid the bills. Government is a lot of things, but it's often the lowest-common-denominator solution.
The product I work on is geared towards big corporate IT environments, and I can confirm that this sort of thing is not unusual at all.
A recent support ticket went along the lines of:
Customer: An audit discovered that JDK version X was installed as part of your software. It has a vulnerability and we demand a way to upgrade to JDK X+1 that has the fix.
Our support team: We're already aware of that and the latest point release of our software bundles JDK X+2, which fixes that vulnerability and 2 others. Please upgrade.
Customer: Our compliance team requires JDK X+1. Please provide a way to install this version.
We eventually solved the problem by having them upgrade to the latest major release of our software, which doesn't use Java at all, but it boggles my mind that they wanted a _less_ secure JDK.
After years of being beaten by customers with stories like these, I learnt to treat InfoSec and Compliance teams as finite state machines, particularly at banks and other financial institutions. Learn not to question the sacred spreadsheet, or debate the merits of a request. It's pointless, and all the eye-rolling will only land you at the optometrist.
Instead, treat compliance like part of your API. Ensure your product delivers on the expected answer, while continuously improving the security of your products in the parts that are not directly visible.
However, DO get it in writing that the option was offered to them, for possible future court battles, so that the onus is on them for damages from failed security.
Maybe JDK X+1 had gone through a deep and thorough review at some point that got it put on some "OK" list somewhere? And maybe X+2 was too new to have made it through that same deep and thorough review. It makes sense from an auditor's perspective, maybe X+2 has new bugs that X+1 didn't have. They want the good version, not the newest version.
OP's story and the article's author are kind of missing the point. These are both simple stories of a vendor failing to meet a [presumably] written requirement: The customer, or regulator, required X, and vendor decided instead to provide Y, and then were dumbfounded when that was deemed unacceptable. OP's vendor went farther, offering Z instead, and the customer again reminded them that X was required. It doesn't really matter if there are better alternatives than X. Those alternatives are not part of the requirement.
Whether Y=X-1 or Z=X+1 is irrelevant. Customer requires X, you provide X or they'll find another software vendor.
"but it boggles my mind that they wanted a _less_ secure JDK."
This should not be hard to understand at all.
A lot of things may have changed in those revs beyond the extra patches, and those changes affect systems. It's likely the software was not approved in the new operating environment.
You can't just go ahead and use Java 11 on software designed for Java 8; there may be issues.
The patches that go into one specific rev are what they want, and no more.
Most individuals are not qualified to make versioning decisions.
The larger the company, the more at stake.
It's possible it was due to bureaucratic numbness, and the agent probably should have checked harder on the new versions, but wanting a specific version is reasonable depending on the circumstances.
Our Support Team: We will be happy to comply with your request once you sign this liability release form indicating that you want the less-secure X+1 update, and not the more secure X+2 update which we recommend.
I know this is presented as a sort of ridiculous "government bureaucracy" story, but anecdotally, a non-trivial chunk of the security/compliance industry is built on compliance over security. Not all, of course, but enough that I think it's a big problem.
I've literally had auditors ask me whether I'm actually interested in security or whether I just need the sign-off ("collect dust in a corner").
Not that this is unique to security. Similar things definitely happen with accounting or any industry where you pay the people who audit you (and probably even ones where you don't but the auditors still have an interest in not pissing you off, eg a government entity where officials may want to work for your company some day).
I have worked with a large healthcare firm, at two different agencies now, that outsources compliance checks to overseas teams. There is nothing but a lengthy checklist to move through, and if you do not meet the standards of the checklist, you fail.
There is no critical thought or evaluation of each step; it’s a simple pass/fail. Example would be “are your drives encrypted at rest?” It doesn’t matter if you’re in a SOC3 facility, located 25 feet below street level in 8 foot thick concrete walls and your files are distributed in pieces across millions of drives throughout the data center. Nope. The drives must be encrypted at rest. Pass. Fail.
You seem to have a problem with the standard that says SOC3 facilities should have encrypted hard drives, not the auditor who is actually trying to enforce that standard. You take it up with SOC3, not the auditor. :-)
You're also assuming the auditors are familiar with all sorts of technology and security mumbo-jumbo. In my experience, they typically are not; their skillset is to "audit", not make sense of the latest 10000 rounds of mybestcrypto.
Today they may be auditing a SOC3 facility, tomorrow it'll be a car manufacturing plant. The only "source of truth" is the standard they carry, and any deviation has to be noted. It is as simple as that!
Many people need to fail, including the author, for such a situation to arise.
The author failed to understand the mission of the business and how non-compliant technology posed a risk to it.
The auditors failed to understand the developer and to educate them on the compromises needed to remain compliant. They also failed to engage in a discussion around alternatives.
Everyone failed to actually care enough to solve this problem, but the author still found time for a snarky misinformed article.
The vast majority of grating security/compliance tales come from a similar place of ignorance, apathy, and snark. The people are the problem: specifically the misperception, on all sides, that they don't need to own these requirements for the business and can just hurry to check a box.
Agreed. I generally encounter these kinds of complaints from safety-compliance perspectives. At the core is a lack of alignment and collective ownership of the necessary final outcome.
My overriding impression has always been that at the high level, both safety inspectors and the people in the labs want the same thing: a safe working environment and the ability to get science done. There is always some tension there (scientists, especially young scientists, are sometimes willing to accept more risk to get science done, as it is their time, and hence lives, being gradually consumed by additional measures to improve safety), but there is general agreement that safe working conditions are a huge net benefit.
It is difficult for the workers in the labs to place themselves in the shoes of the safety inspectors -- what seems like paperwork is actually a surface check to see if you are organized. If you can't explain your safety procedures to someone versed in scientific safety, how can you possibly explain them to an untrained worker/student?
The one thing that can frustrate the entire process, which I suspect is at the root of most university troubles, is a lack of goal-alignment between departments within the organization. If safety inspectors don't feel ownership of getting quality science out the door and only want to reach internal safety targets and scientists are only interested in holding their small lab's accident rate to zero, rather than that of the much-larger university as a whole, nobody's goals will ever get met.
When things are going right, inspections are a chance to show off how awesome your systems are and an opportunity to improve.
> My overriding impression has always been that at the high level, both safety inspectors and the people in the labs want the same thing: a safe working environment and the ability to get science done.
The interests of workers, safety auditors and institutions only partly intersect - like a venn diagram.
All parties can (hopefully) agree they don't want anyone killed or seriously injured.
But only some parties are interested in shielding the institution from liability.
Only some parties are interested in bright-line rules that choose rule simplicity over rule accuracy.
Only some parties are interested in stopping the institution from skimping on safety equipment to save money.
Only some parties are interested in seeing workers respected as masters of their crafts, able to use their own judgement.
Only some parties benefit from producing the portion of compliance documentation that nobody will ever refer to.
And only some parties are interested in work getting done in a timely manner.
> The people are the problem, and the misperception that they don't need to own these requirements for the business and are just in a hurry to check a box (on all sides).
Agreed. I think the framing of compliance in most companies helps reinforce this mindset. It comes from a place of compliance as "avoiding downside in the form of fines or gov't censure", not compliance as "a set of standards we abide by, that provides upsides to all of our customers by virtue of our following them".
I can confirm this. We had to install virus scanners on our self-driving car Linux boxes which are disconnected from any networks...
Fun part is that the AV scanner sometimes takes so much CPU time that the pedestrian detection algorithm fails and the car has an increased chance of hitting them.
This is insane, and should be considered criminally negligent.
Linux is not a high assurance RTOS. It does not have adequate reliability nor can it provide any guarantee of hard realtime, and thus it should not be found anywhere near a "pedestrian detection algorithm" that's supposed to protect a car from hitting pedestrians.
There's proper operating systems[0] for this sort of scenario.
You're thinking about this in terms of deadlines rather than trade offs.
If you build a million cars, they're going to hit a certain number of pedestrians, statistically. Literally zero is the ideal but not necessarily achievable.
If you spend more computation on pedestrian detection, it will do better. If you have to spend computation on useless antivirus, it can't be spent on pedestrian detection, or some other thing that improves safety. And pedestrian detection is itself a trade off -- one algorithm might be more accurate but slower, and so give the vehicle less time to respond after detection. Using a RTOS doesn't save you -- if the CPU isn't fast enough to run both the algorithm and the antivirus then it could have to starve the antivirus of resources indefinitely, which might not be compliant. So then the presence of the antivirus requires you to use an algorithm which is faster but less accurate.
You could also use a faster processor, but that's still a trade off. It could increase the cost of the vehicle and cause some people to continue to use vehicles that are less expensive and less safe, leading to an overall cost in lives.
Any time you're making a trade off where one of the variables is human lives, any inefficiency that requires you to make the trade off in a worse way is potentially costing lives. And installing antivirus where it doesn't belong is an inefficiency.
Gotta call BS on OP here. Anything time sensitive like self-driving cars absolutely has to be built on a real-time operating system. If you’re in the U.S. there are Dept of Transportation requirements to even be allowed to test drive the thing on any road surface other than your own driveway.
OP might mean "When I was a student working on a self driving car student project, which we mostly tested in simulation, occasionally on private land with a lot of extra safety precautions, and never on public roads"
That seems like a serious engineering ethics problem that needs to be escalated to the highest level possible and if that doesn’t work, then leaked to the media.
That's the moment where you have to sue, sabotage the auditors, enlist politicians, or go to the press. There has to be a line in the sand, and that's when clueless bureaucracy like this endangers lives. When (not if) that car kills someone, it's not only on the auditor; it's also on you ("you" as in "the people complying").
Impressive how professional negligence is the "fun part" for you. If it ever goes wrong, I hope someone goes to jail for that. (yes, stupid requirements suck. but if they actually impact things that matter, "fun" is not the appropriate response)
I highly suspect it is. Every audit I know of allows for mitigating controls. Having a system properly air-gapped would allow a system to be run without antivirus. I doubt many auditors would require antivirus on network switches.
This sounds strange because safety systems are usually hard realtime. I can't imagine those folks tolerating something that can randomly decide to eat time slices as it pleases.
Not sure about this one. Viruses don't need a network or internet connection to spread; they used to spread just fine via floppy disks.
EDIT:
Obviously the fact that the CPU can't handle virus scanning and pedestrian detection at the same time is shockingly bad.
...
But a self driving car with a virus that could cause it to do potentially anything is even worse.
Why not just run the scans when the car is not in motion, or when charging?
Is it that unreasonable for them to want people to use FIPS approved algorithms? I mean all you're showing them is how fast the current implementations of the algorithms are, and obviously being too fast isn't great for password hashing, but your benchmark doesn't in itself actually prove the security of bcrypt as a hash algorithm so I don't know why you would expect that to convince them.
Maybe the real problem is that the FIPS standards are too conservative?
> Conservative would be one thing. Stubbornly out of date would be another.
Indeed. Also often impractical.
A way to solve this is to support both a regular mode of operation and a FIPS mode of operation. I've worked on multiple enterprise products at different companies that take this approach. The full-on FIPS compliant mode is there to check all the boxes for customers who need that for their auditors. Even within the set of those customers, a majority don't actually run the product in FIPS mode because it's too limited.
SHA-2 isn’t appropriate as a password hashing algorithm, so something is wrong with both the compliance team and the engineers who didn't correct them that a KDF is needed. Apparently NIST even calls out regular hash functions as being unsuitable for hashing passwords [1]. You’d need to use PBKDF2, although maybe this advice has been updated since?
Maybe they were being recommended PBKDF2 because that too is more susceptible to GPU attacks? Really should be using scrypt rather than bcrypt anyway.
All that being said, using SHA-2 isn’t necessarily “wrong” if you can force your users to use longer passwords. My understanding (could be wrong; I’m not a security algorithms expert) is that all these “hard” hashing algorithms are trying to provide guarantees even when users have poor security hygiene. If you can force your users to use randomly generated 20-character passwords, then you can significantly reduce your server load and latency by using a faster hashing algorithm.
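For what it's worth, here is a minimal sketch of the PBKDF2 route using only the Python standard library. The iteration count and salt size are illustrative assumptions on my part, not a vetted recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 210_000  # illustrative work factor; tune to your own latency budget

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a key from the password; store both the salt and the key."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    key = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(key, stored_key)
```

The per-user salt defeats precomputed tables and the iteration count sets the brute-force cost, which is exactly what a bare SHA-2 digest lacks.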
The real problem is that security decisions are being made by bureaucrats. Full stop.
Bureaucracy invariably requires compliance, and only compliance. Some problems can be cast in terms of compliance and thus solved in a bureaucracy, but security is not and never will be one of them.
Are you saying the auditors are the bureaucracy or the FIPS standards approvers are the bureaucracy? All the auditors are saying is that you have to use a FIPS approved algorithm. The FIPS people have real cryptography experts who research this stuff on a daily basis that also help set the security standards for the entire U.S. government. The author is coming along here saying they are using something else that’s not approved by FIPS (for good reason I might add) and they’re mad the auditors won’t let them simply because “everyone else is using bcrypt”.
Bureaucrats track compliance. People who know security and bureaucracy (ideally) design the standards. Engineers are still responsible for building secure systems that happen to also check all the compliance boxes.
Compliance is mostly intended to prevent really hideous errors from falling through the cracks. Nobody believes that you can't build a compliant, insecure system, or that you can't build a secure, non-compliant system.
The author’s assertions are misleading at best, outright false at worst. Their testing is inherently flawed, and they’re misinterpreting the output from hashcat. Although their choice of bcrypt is a good one, they clearly don’t understand how to actually evaluate different algorithms, and I commend the auditor for not allowing them to do so.
The author’s process doesn’t prove that bcrypt is more secure than whatever SHA2-based alternative was being proposed (from the example, seemingly sha512crypt). It simply proves that the number of rounds they chose for sha512crypt didn’t match the timing factor they chose for bcrypt. That’s just dumb.
I could just as easily provide a counter-example by stacking the odds in my favor. The time it takes to brute force a bcrypt or sha512crypt hash is configurable when generating the hash; I could just as easily choose options that appear to support sha512crypt being more secure.
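To make that concrete, here is a toy illustration: an iterated SHA-512 loop, deliberately simplified and NOT real sha512crypt or any real KDF, showing that wall-clock cost is determined by whatever rounds parameter you happen to pick. A speed comparison between two algorithms at arbitrary cost settings therefore proves nothing about the algorithms themselves:

```python
import hashlib
import time

def iterated_sha512(password: bytes, salt: bytes, rounds: int) -> bytes:
    # Toy construction for timing purposes only; NOT real sha512crypt.
    digest = hashlib.sha512(salt + password).digest()
    for _ in range(rounds):
        digest = hashlib.sha512(digest + password).digest()
    return digest

def cost(rounds: int) -> float:
    # Wall-clock time of a single hash at the given rounds setting.
    start = time.perf_counter()
    iterated_sha512(b"hunter2", b"pepper", rounds)
    return time.perf_counter() - start

# Same primitive, wildly different "benchmark" results by parameter choice.
cheap = cost(1_000)
expensive = cost(1_000_000)
```

Crank the sha512crypt rounds up by a couple of orders of magnitude from its 5000 default and the same hashcat-style benchmark would "prove" the opposite conclusion.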
What matters here is that the company wanted to use an algorithm that wasn’t requested by their customer. Their customer had a detailed document explaining which algorithms they would prefer. Although bcrypt is generally considered top-notch security, vulnerabilities have certainly been found in various implementations over time.[0] This company’s customer, the US government, wanted something they had personally vetted and approved, which is understandable. Even if you could prove that one algorithm is slower than another, that doesn’t necessarily mean it’s more secure; it’s just more resistant to brute force attacks.
Furthermore, the author says “SHA2-based” without elaborating, causing several HN commenters to assume raw SHA2 was used here. However, the author’s hashcat example shows sha512crypt. That means it’s not raw SHA2; it’s been adapted to be suitable for password hashing, including salting and multiple rounds. It’s the same as calling bcrypt “Blowfish-based”: yes, it’s true, but it’s somewhat misleading if you completely omit any mention of bcrypt. Raw Blowfish should never be used for password storage; it isn’t designed for that, much like SHA2.
While I don't approve, I suspect the auditors were in a difficult position: allow bcrypt, and if it blows up down the road, suffer consequences; or insist on that which will never get them in trouble, even if it is demonstrably inferior.
In 1998, I literally got fired for buying IBM (actual IBM) over Apple. I don't think that me telling people this in the last 20 years has done much to counter this particular meme.
To the state, one of the most important goals is legibility. The state has an enormous burden of regulation in its various enterprises, and the less variance the easier the job. This certainly isn’t ideal in many, many cases, but in this case a person who likely didn’t make the standards quite rationally trusts a vetted standard over one small trivial test given to them by a company they’re evaluating. FIPs standards are probably much more exhaustive than just a hashing performance test on one piece of hardware.
Yes, it's bad when bureaucracy means that individual bad decisions are made. But casually saying "oh, well, this is idiocy, the government doesn't care about security" ignores the fact that organizing the regulation of human behavior is immensely costly, and the government needs to optimize on those costs.
A good dev analogy is using a library function vs rolling your own. The government's library is the list of things that its own engineers approved. The equivalent of rolling their own is for some individual auditor to make an independent decision to deviate from the list. Now, it might be that you can squeeze out an extra tiny bit of performance[/the government can squeeze out some extra security] by rolling your own sort function or whatever, but most wise devs will balance that off against the risk of introducing extra bugs, the extra writing and maintenance time involved, etc.
Reality is that this sucks. It would be amazing if it wasn't actually super-super costly to organize human beings to all point in the same direction and make sure that people don't cheat. Government would be a lot cheaper and more efficient. But, alas, reality does suck.
If your (the author) mindset is that "everyone else is an idiot", be it management, or regulators, auditors or some other group that is not your group (engineer?), then unfortunately you have an attitude problem.
If furthermore you maliciously misinterpret the other group, saying "they want us to be compliant, not secure", based on misunderstanding the domain expertise _of your own group_ (!), then you are a liability to your business and should be kept at arms length from external auditors.
Because the author clearly has no clue what he is doing, and based on his technical shortcomings he is speaking down on others who are _actually_ doing their job correctly: the auditors.
Why is the author comparing bcrypt using 32 iterations with sha512crypt with 5000 iterations? Why not increase the sha512crypt iteration count by orders of magnitude if he want the hashcat test to fair equal or better under the testing methodology in use? Why is he using hashcat and basing decisions solely on that tool? Why is he using shacrypt at all and not a a proper KDF, such as PDKDF2, plugging in SHA2?
The great failing of so many here is that they don't recognize that uniformity and centralization are more important than innovation. If those innovations really are better, then you can vote to elect representatives to nominate regulators to eventually change the requirements. This is how things work in a civilized society.
It is a frustrating end result in this case. I wonder if there's any possibility for escalating this to a regulator's technical resources? I wonder if this kind of thing is typically left to the auditor discretion.
The correct response to all of this would have been an accepted means for individual departments to submit requests for updates/changes to the standard to be voted on before they are forced to comply. The issue at hand is no mechanism for feedback. You can only comply or leave. This is the consistent failure of top-down governance.
This would result in, what's pejoratively called, "design by committee" which inevitably comes with bike-shedding, tit-for-tat politics and so on.
NIST is a public institution with a long history that traces back to the origins of the United States. It's a formal part of the state, and its raison d'être is to ensure that decision making around standardization actually happens. [1]
[1] https://en.wikipedia.org/wiki/National_Institute_of_Standard...
The point of all of this harkens back to a fundamental principle:
If you want to travel fast, go alone. If you want to travel far, go with many.
The organization of a state catering to many millions follows the latter part of that principle, for better or worse. Standardization doesn't guarantee the optimal solution to a problem; it guarantees a workable solution for the majority of a given problem space.
And that's exactly what public governance is about. Ensuring the continued existence of the state as a legitimate, legal entity that governs and serves public interests at large.
> The issue at hand is no mechanism for feedback.
Well, it's perfectly feasible to... just e-mail them or give them a call. The NIST ITL CSRC lab can easily be reached via their website. [2]
If you want to know the decision making process that went into a standard, you can submit a FOIA request. [3]
[2] https://csrc.nist.gov/about/contact-us [3] https://www.nist.gov/foia
This is also how smart folks will nearly always be chased out of government work. It's always safer and less work to make the dumb safe choice rather than the smarter choice that costs less or works better. This explains most of the big problems we have in government. We're not allowed to make smart choices ever, only mediocre choices at best, worst choices most of the time.
How long can we sustain this line of behavior, though? We can do it for quite a while yet, but eventually the overhead of always making mediocre decisions to guarantee mediocre results will kill the state in multiple ways.
I believe bcrypt is a better choice, but if I didn't I wouldn't have been persuaded.
The regulator qua regulator is remote from the narrative, which deals only with the auditors. Who don't even work for the same agency.
The agency issuing the rules is concerned with compliance in a way that actual security does not mitigate or alter, which is exactly the conclusion the article draws, and it is entirely factually correct.
The whole story is that important decisions are made extremely remotely (in all of time, organizational structure, domain, and concern) from the project.
Now, if this were an argument for a specific alternative, or even just an argument against the practice without a proposed replacement, it would need to go further. It would have to address whether the current approach deals with different, potentially even more serious problems (corruption and/or ineptitude in over-empowered public agency managers, for instance), whether it is a net positive or negative in that context, and whether, if it is a net positive, either the specific alternative proposed (if any) or any conceivable alternative would likely be a greater net positive.
But that's not what this piece seems to be trying to do.
It's not like bcrypt is a new one-off algorithm that they invented for this project.
You're making the argument from the book Seeing Like a State. But the thrust of that book is that those practices lead to bureaucratic catastrophes like so-called "scientific forestry" -- which is exactly the author's point here in the context of security.
A recent support ticket went along the lines of:
Customer: An audit discovered that JDK version X was installed as part of your software. It has a vulnerability and we demand a way to upgrade to JDK X+1 that has the fix.
Our support team: We're already aware of that and the latest point release of our software bundles JDK X+2, which fixes that vulnerability and 2 others. Please upgrade.
Customer: Our compliance team requires JDK X+1. Please provide a way to install this version.
We eventually solved the problem by having them upgrade to the latest major release of our software, which doesn't use Java at all, but it boggles my mind that they wanted a _less_ secure JDK.
Instead, treat compliance like part of your API. Ensure your product delivers on the expected answer, while continuously improving the security of your products in the parts that are not directly visible.
Whether Y=X-1 or Z=X+1 is irrelevant. Customer requires X, you provide X or they'll find another software vendor.
Are you arguing that people should just install the latest versions without thinking? That did not go well in the SolarWinds case.
https://www.checkmarx.com/
This should not be hard to understand at all.
A lot of things may have changed in those revs beyond the extra patches, and those changes affect systems. It's likely the software was not approved in the new operating environment.
You can't just go ahead and use Java 11 on software designed for Java 8; there may be issues.
The 'patch' that goes into one rev is what they want - not more.
Most individuals are not qualified to make versioning decisions.
The larger the company, the more at stake.
It's possible it was due to bureaucratic numbness, and the agent probably should have maybe 'checked harder' about the new versions, but wanting a specific version is reasonable depending on the circumstances.
I've literally had auditors ask me whether I'm actually interested in security or whether I just need the sign-off ("collect dust in a corner").
Not that this is unique to security. Similar things definitely happen with accounting or any industry where you pay the people who audit you (and probably even ones where you don't but the auditors still have an interest in not pissing you off, eg a government entity where officials may want to work for your company some day).
There is no critical thought or evaluation of each step; it’s a simple pass/fail. Example would be “are your drives encrypted at rest?” It doesn’t matter if you’re in a SOC3 facility, located 25 feet below street level in 8 foot thick concrete walls and your files are distributed in pieces across millions of drives throughout the data center. Nope. The drives must be encrypted at rest. Pass. Fail.
Sigh.
You're also assuming the auditors are familiar with all sorts of technology and security mumbo-jumbo. In my experience, they typically are not - their skillset is to "audit", not make sense of the latest 10000 rounds of mybestcrypto.
Today they may be auditing a SOC3 facility, tomorrow it'll be a car manufacturing plant. The only "source of truth" is the standard they carry, and any deviation has to be noted. It is as simple as that!
One of the objectives of encryption at rest is shredding. The drives may not remain in the secure facility after their end of life.
> your files are distributed in pieces across millions of drives throughout the data center
Distributing data could actually secure it if individual pieces are meaningless.
The author failed to understand the mission of the business and how non-compliant technology posed a risk to it.
The auditors failed to understand the developer, and educate them on the compromises needed to remain compliant. They also failed to engage in a discussion around alternatives.
Everyone failed to actually care enough to solve this problem, but the author still found time for a snarky misinformed article.
The vast majority of grating security/compliance tales come from a similar place of ignorance, apathy and snark. The people are the problem, and the misperception that they don't need to own these requirements for the business and are just in a hurry to check a box (on all sides).
My overriding impression has always been that at the high level, both safety inspectors and the people in the labs want the same thing: a safe working environment and the ability to get science done. There is always some tension there (scientists, especially young scientists, are sometimes willing to accept more risk to get science done, as it is their time, and hence lives, being gradually consumed by additional measures to improve safety), but there is general agreement that safe working conditions are a huge net benefit.
It is difficult for the workers in the labs to place themselves in the shoes of the safety inspectors -- what seems like paperwork is actually a surface check to see if you are organized. If you can't explain your safety procedures to someone versed in scientific safety, how can you possibly explain them to an untrained worker/student?
The one thing that can frustrate the entire process, which I suspect is at the root of most university troubles, is a lack of goal-alignment between departments within the organization. If safety inspectors don't feel ownership of getting quality science out the door and only want to reach internal safety targets and scientists are only interested in holding their small lab's accident rate to zero, rather than that of the much-larger university as a whole, nobody's goals will ever get met.
When things are going right, inspections are a chance to show off how awesome your systems are and an opportunity to improve.
The interests of workers, safety auditors and institutions only partly intersect - like a venn diagram.
All parties can (hopefully) agree they don't want anyone killed or seriously injured.
But only some parties are interested in shielding the institution from liability.
Only some parties are interested in bright-line rules that choose rule simplicity over rule accuracy.
Only some parties are interested in stopping the institution from skimping on safety equipment to save money.
Only some parties are interested in seeing workers respected as masters of their crafts, able to use their own judgement.
Only some parties benefit from producing the portion of compliance documentation that nobody will ever refer to.
And only some parties are interested in work getting done in a timely manner.
Agreed. I think the framing of compliance in most companies helps reinforce this mindset. It comes from a place of compliance as "avoiding downside in the form of fines or gov't censure", not compliance as "a set of standards we abide by, that provides upsides to all of our customers by virtue of our following them".
Linux is not a high assurance RTOS. It does not have adequate reliability nor can it provide any guarantee of hard realtime, and thus it should not be found anywhere near a "pedestrian detection algorithm" that's supposed to protect a car from hitting pedestrians.
There's proper operating systems[0] for this sort of scenario.
[0]: https://sel4.systems/About/seL4-whitepaper.pdf
If you build a million cars, they're going to hit a certain number of pedestrians, statistically. Literally zero is the ideal but not necessarily achievable.
If you spend more computation on pedestrian detection, it will do better. If you have to spend computation on useless antivirus, it can't be spent on pedestrian detection, or some other thing that improves safety. And pedestrian detection is itself a trade off -- one algorithm might be more accurate but slower, and so give the vehicle less time to respond after detection. Using an RTOS doesn't save you -- if the CPU isn't fast enough to run both the algorithm and the antivirus then it could have to starve the antivirus of resources indefinitely, which might not be compliant. So then the presence of the antivirus requires you to use an algorithm which is faster but less accurate.
You could also use a faster processor, but that's still a trade off. It could increase the cost of the vehicle and cause some people to continue to use vehicles that are less expensive and less safe, leading to an overall cost in lives.
Any time you're making a trade off where one of the variables is human lives, any inefficiency that requires you to make the trade off in a worse way is potentially costing lives. And installing antivirus where it doesn't belong is an inefficiency.
Before you will kill or maim someone innocent.
If you must comply with this scanner, then buy 16 GB of RAM and an SSD (or more hardware, depending on the bottleneck) rather than plan to kill people.
Also, who decided that a self-driving car should not operate as a real-time system?
I've done a little work with safety-critical systems, and that's certainly a new requirement to me both in theory and practice.
I don't think a company like that can really function.
My advice, leave before it implodes.
EDIT: Obviously the fact that the CPU can't handle virus scanning + pedestrian detection at the same time is shockingly bad. ... But a self-driving car with a virus that could cause it to do potentially anything is even worse.
Why not just run the scans when the car is not in motion, or when charging?
Auditor: This internet-connected device has no floppy drive. No antivirus needed.
Maybe the real problem is that the FIPS standards are too conservative?
Quoting a professional cryptographer: FIPS compliance means that your code/system is sufficiently mediocre.
Conservative would be one thing. Stubbornly out of date would be another.
Indeed. Also often impractical.
A way to solve this is to support both a regular mode of operation and a FIPS mode of operation. I've worked on multiple enterprise products at different companies that take this approach. The full-on FIPS compliant mode is there to check all the boxes for customers who need that for their auditors. Even within the set of those customers, a majority don't actually run the product in FIPS mode because it's too limited.
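A minimal sketch of that dual-mode idea, assuming Python's standard library. The function name, parameters, and the choice of scrypt for the non-FIPS path are my assumptions, not details from any of the products mentioned.

```python
import hashlib
import os

def hash_password(password: bytes, fips_mode: bool):
    """Hypothetical dual-mode hasher. In FIPS mode, use a
    FIPS-approved construction; otherwise use a memory-hard one.
    All parameters are illustrative."""
    salt = os.urandom(16)
    if fips_mode:
        # PBKDF2 with HMAC-SHA-512 is a FIPS-approved construction.
        digest = hashlib.pbkdf2_hmac("sha512", password, salt, 600_000)
    else:
        # scrypt is memory-hard: a stronger default where FIPS is not required.
        digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
    return salt, digest

salt, digest = hash_password(b"hunter2", fips_mode=True)
```

The deployment flag, not the code path, is what the auditor sees, which is why this pattern checks the box without constraining the default.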
All that being said, using SHA-2 isn't necessarily "wrong" if you can force your users to use longer passwords. My understanding (could be wrong - I'm not a security algorithms expert) is that all these "hard" hashing algorithms are trying to provide guarantees even if your user has poor security hygiene. If you can force your user to use a randomly generated 20-character password, then you can significantly reduce your server load and reduce latency by using a faster hashing algorithm.
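A quick back-of-envelope for that claim, assuming a uniformly random password over a 62-symbol alphabet (my assumption; the comment doesn't specify one):

```python
import math

# Entropy of a uniformly random password: length * log2(alphabet size).
alphabet_size = 62   # [A-Za-z0-9], assumed for illustration
length = 20
entropy_bits = length * math.log2(alphabet_size)
print(round(entropy_bits, 1))  # ~119.1
```

At roughly 119 bits, brute force is infeasible even against a fast hash, which is why slow hashes mainly protect low-entropy, human-chosen passwords.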
Bureaucracy invariably requires compliance, and only compliance. Some problems can be cast in terms of compliance and thus solved in a bureaucracy, but security is not and never will be one of them.
Compliance is mostly intended to prevent really hideous errors from falling through the cracks. Nobody believes that you can't build a compliant, insecure system, or that you can't build a secure, non-compliant system.
If they had required PBKDF2 that would have made more sense. That at least confirms you used the pseudorandom function correctly.
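For instance, a minimal PBKDF2 store-and-verify sketch; the function names and parameters below are illustrative, not taken from any standard's text:

```python
import hashlib
import hmac
import os

def make_record(password: bytes, iterations: int = 210_000):
    # Fresh random salt per password; store salt, iterations, and digest.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return salt, iterations, digest

def verify(password: bytes, record) -> bool:
    salt, iterations, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    # Constant-time comparison avoids leaking where the digests differ.
    return hmac.compare_digest(candidate, digest)

record = make_record(b"s3cret")
print(verify(b"s3cret", record), verify(b"wrong", record))  # True False
```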
The author’s process doesn’t prove that bcrypt is more secure than whatever SHA2-based alternative was being proposed (from the example, seemingly sha512crypt). It simply proves that the number of rounds they chose for sha512crypt didn’t match the timing factor they chose for bcrypt. That’s just dumb.
I could just as easily provide a counter-example by stacking the odds in my favor. The time it takes to brute force a bcrypt or sha512crypt hash is configurable when generating the hash; I could just as easily choose options that appear to support sha512crypt being more secure.
What matters here is that the company wanted to use an algorithm that wasn't requested by their customer. Their customer had a detailed document explaining which algorithms they would prefer. Although bcrypt is generally considered top notch security, vulnerabilities have certainly been found in various implementations over time.[0] This company's customer, the US government, wanted something they had personally vetted and approved, which is understandable. Even if you could prove that one algorithm is slower than another, that doesn't necessarily mean it's more secure; it's just more resistant to brute force attacks.
Furthermore, the author says "SHA2-based" without elaborating, causing several HN commenters to assume raw SHA2 was used here. However, the author's hashcat example shows sha512crypt. That means it's not raw SHA2; it's been adapted into a proper password hashing scheme, including salting and multiple rounds. It's the same as calling bcrypt "Blowfish-based": yes, it's true, but it's somewhat misleading if you completely omit any mention of bcrypt. Raw Blowfish should never be used for password storage; it isn't designed for that, much like SHA2.
[0]: https://en.wikipedia.org/wiki/Bcrypt#Versioning_history
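To illustrate the format point: a sha512crypt string in modular crypt format carries its scheme id, rounds, and salt inline, which is exactly what separates it from raw SHA-2. The string below uses a well-known documentation salt with a truncated digest; it is illustrative, not a real credential.

```python
# "$6$" identifies sha512crypt; rounds and salt are embedded per hash.
example = "$6$rounds=5000$usesomesillystri$D4IrlXatmP7rx3P3InaxBeoomnAihCKREY..."

_, scheme, rounds_field, salt, digest = example.split("$")
print(scheme, rounds_field, salt)  # 6 rounds=5000 usesomesillystri
```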