>after having received a lukewarm and laconic response from the HackerOne triage team.
A slight digression, but lol, this is my experience with all of the bug bounty platforms. Reports of issues that are actually complicated, or that require an in-depth understanding of the technology, get brickwalled, because reports of difficult problems are written for... people who understand difficult problems and difficult technology. The runarounds are not worth the time for people who try to solve difficult problems; they have better things to do.
At least Cloudflare has a competent security team that can step in and say "yeah, we can look into this because we actually understand our whole technology". It's sad that to get through to a human on these platforms you effectively have to write two reports: one for the triagers who don't understand the technology at all, and one for the competent people who actually know what they're doing.
There is definitely a misalignment of incentives with the bug bounty platforms. You get a very large number of useless reports, which creates a lot of noise, and you have to sift through that noise to get a serious report once in a while. So the platforms upsell you on using their people to sift through the reports for you. Only these people do not have the domain expertise to understand your software and dig into the vulnerabilities.
If you want the top-tier "hackers" on the platforms to see your bug bounty program, then you have to pay the upcharge for that too, so again, a misalignment of incentives.
The best thing you can do is have an extremely clear bug-bounty program detailing what is in scope and out of scope.
Lastly, I know it's difficult to manage, but open source projects should also have a private vulnerability reporting mechanism set up. If you are using GitHub, you can set up your repo with: https://docs.github.com/en/code-security/security-advisories...
One way to correct this misalignment is to give the bounty platform a cut of the bounty. This is how Immunefi works, and I've so far not heard anyone unhappy with communicating with them (though I of course will not be at all shocked or surprised if a billion people reply to me saying I simply haven't talked to the right people and in fact everyone hates them ;P).
The backstory here, of course, is that the overwhelming majority of reports on any HackerOne program are garbage, and that garbage definitely includes 1990s sci.crypt-style amateur cryptanalyses.
IMO it’s no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre. And that’s assuming the company even has a responsible disclosure program; otherwise you risk putting your ass on the line.
I don’t like bounty programs. We need Good Samaritan laws that legally protect and reward white hats. Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
> IMO it’s no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre.
Show me the incentives, and I'll show you the outcomes.
We really need to make security liabilities just that: liabilities. If you are running 20+ year-old code and you get hacked, you need to be fined in a way that will make you reconsider security as a priority.
Also, you need to be liable for all of the disruption that the security breach caused for customers. No, free credit monitoring does not count as recompense.
> We need Good Samaritan laws that legally protect and reward white hats.
What does this even mean? How is a government going to do a better job of valuing and scoring exploits than the existing market?
I'm genuinely curious about how you suggest we achieve:
> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.
So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there is too much low-value noise, so the market converges to mediocrity, and I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.
And, the industry and governments have tried punitive regulation - "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works as it increases pay for in-house security and makes work for consulting firms. This notion might be worth expanding in some areas, but just like financial regulation, it is a double edged sword - it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."
Companies get hacked because Bob in finance doesn't have MFA and got a phishing email. In my experience working for MSPs, it's always been phishing and social engineering. I have never seen a company compromised by some obscure bug in software. This may be different for super large organizations that are international targets, but for the average person or business, you're better off spending your time MFAing everything you can and using common sense.
Had the same experience last time I attempted to report an issue on HackerOne. Triage did not seem to actually understand the issue and, for some reason, insisted on a PoC they could run themselves that demonstrated the maximum impact, even though any developer familiar with the actual code at hand could see the problem in about ten seconds. Ended up writing to some old security email I found for the company to look at the report, and they took care of it one day later, so good ending I guess.
This was about an issue in a C++ RPC framework not validating that object references are of the correct type during deserialization from network messages, so the actual impact is kind of unbounded.
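The framework isn't named, so here's a hypothetical Go analogue of that bug class (every name below is made up): an RPC layer resolves a wire-supplied object reference and uses it without checking that the referenced object has the expected type. In Go the unchecked form merely panics; the C++ equivalent is a type-confused pointer, hence the unbounded impact.

```go
package main

import "fmt"

// Toy "object table" standing in for an RPC framework's live references.
type FileHandle struct{ path string }
type AdminSession struct{ user string }

var objects = map[uint32]interface{}{
	1: &FileHandle{path: "/tmp/upload"},
	2: &AdminSession{user: "root"},
}

// Vulnerable: trusts the wire-supplied ID and blindly asserts the type.
// A malicious client sends ID 2 and reaches an AdminSession where a
// FileHandle was expected. Go panics here; C++ would keep running with
// a type-confused pointer.
func lookupUnchecked(id uint32) *FileHandle {
	return objects[id].(*FileHandle)
}

// Fixed: validate the dynamic type before use.
func lookupChecked(id uint32) (*FileHandle, error) {
	obj, ok := objects[id]
	if !ok {
		return nil, fmt.Errorf("unknown object %d", id)
	}
	fh, ok := obj.(*FileHandle)
	if !ok {
		return nil, fmt.Errorf("object %d is not a FileHandle", id)
	}
	return fh, nil
}

func main() {
	if _, err := lookupChecked(2); err != nil {
		fmt.Println("rejected:", err)
	}
}
```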
Side note: what a nice background gradient those guys put into that website! It goes from dark sky blue to dry desert soil at the bottom. Nice artistic touch.
User-supplied EC point validation is one of the most basic yet crucial steps in a sound implementation. I wonder why no one (and no tests) at Cloudflare caught this carelessness pre-signoff and pre-release.
The article's deep dive into the math does it a disservice IMO, by making this seem like an arcane and complex issue. This is an EC Cryptography 101 level mistake.
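To make the 101-level point concrete, here's a minimal sketch of the baseline validation using Go's standard crypto/elliptic package (validatePoint is a name I've made up). For a cofactor-1 curve like P-256, rejecting the point at infinity, out-of-range coordinates, and off-curve points is sufficient; FourQ, with cofactor 392, additionally needs the subgroup check quoted from Wikipedia further down the thread.

```go
package main

import (
	"crypto/elliptic"
	"fmt"
	"math/big"
)

// validatePoint performs the baseline checks for a user-supplied point on
// a prime-order (cofactor 1) curve: reject the point at infinity (which
// crypto/elliptic represents as (0, 0)), reject out-of-range coordinates,
// and reject coordinates that don't satisfy the curve equation.
func validatePoint(curve elliptic.Curve, x, y *big.Int) error {
	if x == nil || y == nil || (x.Sign() == 0 && y.Sign() == 0) {
		return fmt.Errorf("point at infinity")
	}
	p := curve.Params().P
	if x.Sign() < 0 || y.Sign() < 0 || x.Cmp(p) >= 0 || y.Cmp(p) >= 0 {
		return fmt.Errorf("coordinate out of range")
	}
	if !curve.IsOnCurve(x, y) {
		return fmt.Errorf("point not on curve")
	}
	return nil
}

func main() {
	// A point that is definitely not on P-256 gets rejected.
	fmt.Println(validatePoint(elliptic.P256(), big.NewInt(1), big.NewInt(2)))
}
```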
Reading the actual CIRCL library source and README on GitHub (https://github.com/cloudflare/circl) makes me see it as just fundamentally unserious, though: there's a big "lol, don't use this!" disclaimer, no elaboration on the considerations applied to each implementation to avoid common pitfalls, no mention of third- or first-party audit reports, nor really anything else I'd expect to see from a cryptography library.
It's more subtle than that and is not actually that simple (though the attack is). The "modern" curve constructions pioneered by Bernstein are supposed to be misuse-resistant in this regard; Bernstein popularized both Montgomery and Edwards curves. His two major curve implementations are Curve25519 and Ed25519, which are different mathematical representations of the same underlying curve. Curve25519 famously isn't vulnerable to this attack!
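For contrast, a quick sketch of that misuse resistance using the real golang.org/x/crypto/curve25519 package: its X25519 function detects low-order input points by checking for an all-zero shared secret and returns an error instead of handing back a value that leaks information about the scalar (the exact error wording is not guaranteed here).

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/curve25519"
)

func main() {
	// Generate a fresh X25519 private scalar.
	priv := make([]byte, curve25519.ScalarSize)
	if _, err := rand.Read(priv); err != nil {
		panic(err)
	}

	// u = 0 is one of Curve25519's few low-order points. X25519 notices
	// that the resulting shared secret is all zeros and refuses it, so a
	// small-subgroup probe gets an error rather than a usable oracle.
	lowOrder := make([]byte, curve25519.PointSize)
	_, err := curve25519.X25519(priv, lowOrder)
	fmt.Println(err) // reports a low-order input point
}
```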
Does the “don’t implement your own cryptography” advice apply to multi-billion-dollar companies, or is it just for regular, garden-variety developers?
Some of the issues, like validating input, seem like they should have been noticed. But of course one would need to understand how it works to notice them. And certainly, in a company like CF, someone would know how this is supposed to work…
Surely the devs would have at least opened the Wikipedia page (https://en.wikipedia.org/wiki/FourQ) to read:
> In order to avoid small subgroup attacks,[6] all points are verified to lie in an N-torsion subgroup of the elliptic curve, where N is specified as a 246-bit prime dividing the order of the group.
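CIRCL's FourQ code isn't reproduced here, but the principle in that quote is easy to show. As a stand-in for the curve, here's a toy sketch of the same subgroup-membership check in a multiplicative group, using only math/big (inPrimeOrderSubgroup and all constants are illustrative); the elliptic-curve analogue of verifying y^q ≡ 1 (mod p) is verifying that [N]P equals the identity for the 246-bit prime N.

```go
package main

import (
	"fmt"
	"math/big"
)

// inPrimeOrderSubgroup reports whether a peer-supplied value y lies in the
// subgroup of prime order q inside Z_p*, where p = 2q + 1 is a safe prime.
// Values outside that subgroup (order 1 or 2 here) are exactly the
// small-subgroup elements an attacker would send.
func inPrimeOrderSubgroup(y, q, p *big.Int) bool {
	one := big.NewInt(1)
	pMinusOne := new(big.Int).Sub(p, one)
	if y.Cmp(one) <= 0 || y.Cmp(pMinusOne) >= 0 {
		return false // reject 0, 1, and p-1 outright
	}
	// y is in the order-q subgroup iff y^q == 1 (mod p).
	return new(big.Int).Exp(y, q, p).Cmp(one) == 0
}

func main() {
	p := big.NewInt(23) // tiny safe prime: 23 = 2*11 + 1
	q := big.NewInt(11)

	fmt.Println(inPrimeOrderSubgroup(big.NewInt(2), q, p))  // true: 2 has order 11
	fmt.Println(inPrimeOrderSubgroup(big.NewInt(22), q, p)) // false: 22 = p-1 has order 2
}
```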
So should they have opted for a nonexistent implementation of FourQ in Go so they don't have to roll their own (keeping in mind this is a library for experimental deployment of PQ and ECC)?
They should have found someone who knows what they are doing, or not implemented it at all. We're talking about a company with $1B+ in yearly revenue here.
They put their name behind it (https://blog.cloudflare.com/introducing-circl/), and it looks like whoever they hired to do the work couldn't even read the Wikipedia page for the algorithm.
To wit: even then, the old maxim still applies to _most developers inside Cloudflare_. Yes, some global/specialist corps can have actual applied-crypto and security expertise. But the vast, vast majority of usage should still rely on tools developed and tested by actual SMEs.
Just for fun, do you happen to have any links to public reports like that (the sci.crypt-style amateur cryptanalyses)? Seems entertaining if nothing else.
The only use listed at https://en.wikipedia.org/wiki/FourQ is "FourQ is implemented in the cryptographic library CIRCL, published by Cloudflare."
Funny if that's true.