What is really sad is that this could have been a good pen-test for the kernel, especially the idea of introducing bugs that are only vulnerabilities when combined.
If only they had contacted the Linux Foundation ahead of time to get permission and set up terms, like a real pen-test. Then work could be done on detecting and preventing these sorts of attacks, maybe resulting in a system that could help everyone. At the very least, there could be a database where registered security researchers, unknown to maintainers, store hashes of their bad commits, followed by a check of whether any of those commits made it through. I know the kernel makes heavy use of rebasing, so that might not be the best approach technically, but something along those lines. To ease the pain of wasting developers' time, sponsors could put up money that the maintainer gets if they catch a bad commit, and maybe a smaller amount if they have to revert it.
EDIT: If the Linux Foundation said no, they could have tried another large open source project with a governing body: Apache, Python, Postgres, Firefox, etc. It wouldn't have been as flashy and high profile, but it would have been the same research, and odds are at least one project would have been willing to participate.
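The registry idea above could be as simple as a set intersection between the hashes researchers register and the hashes that actually land in the tree. A minimal sketch (all names here are hypothetical, not a real Linux Foundation API; and since rebasing rewrites commit hashes, a real system would need something rebase-stable, such as git's patch-id, instead of raw SHAs):

```python
# Hypothetical sketch of the "registry of bad commits" idea: researchers
# privately register hashes of intentionally bad commits, and the project
# later checks whether any of them slipped through review and were merged.

def find_escaped_commits(registered_bad_hashes, merged_hashes):
    """Return the registered bad commits that made it into the tree."""
    return sorted(set(registered_bad_hashes) & set(merged_hashes))

# Example: two flagged commits, one of which was merged.
bad = ["deadbeef", "cafebabe"]
merged = ["cafebabe", "0123abcd", "42424242"]
print(find_escaped_commits(bad, merged))  # -> ['cafebabe']
```

The check itself is trivial; the hard parts are keeping the registry secret from maintainers during the experiment and picking an identifier that survives rebases.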
> that this could be a good pen-test for the kernel
Not really. Everyone knows this flaw exists, the interesting part is how to fix it. Did you read the "suggestions" the researchers made in their paper[1]? They're clueless.
It's like pointing out that buildings can be robbed by breaking their windows. No shit. What do you want to do about it?
> Not really. Everyone knows this flaw exists, the interesting part is how to fix it. Did you read the "suggestions" the researchers made in their paper[1]? They're clueless.
Fair enough. The kernel maintainers are probably much more aware of this than the average open source project. Maybe for some projects it would change their mindset from knowing that this could be happening, to knowing that this will be happening.
Maybe it's a pipe dream, but I have a feeling it could lead to discussions of "what could we have done to catch this automatically," which in turn would lead to better static analysis tools.
Edit: It would be about as useful as pen testing that includes social engineering. That is to say, everyone knows there are dishonest people, but they may not be aware of some of the techniques they use.
The stated motivations of the researchers make it seem like they are not interested in advancing the security of open source software, but rather undermining it.
Maybe it would be more productive to just do the opposite: when vulnerabilities are found, investigate the committer(s) for possible links and the quality of their previous work.
I think it depends on how you frame it. If the Linux Foundation thinks this kind of research would generate useful information for the kernel project, then the developers' time wouldn't be wasted, just used in a different, yet productive, way.
I concede that this is not an easy question, because the developers may have different opinions about the usefulness of this exercise, but at the end of the day, maintainers can run their projects how they see fit.
Why would the Linux Foundation get to decide on if those researchers are allowed to experiment on and waste the time of volunteer developers?
Granted, in the case of Linux, it might make more sense if it were the combination of the Linux Foundation and Linus. It's their project; they can subject their volunteers to any tests they want. It may drive away some volunteers, but whether to take that risk is up to the project to decide. For something as big as the kernel, they may decide to get permission from the individual maintainers, maybe even limit the research to only the subsystems that agree to participate.
In any case, the point I'm trying to make is that this kind of testing may be beneficial if the project is aware of it and agrees to it. Who gets to decide on behalf of the project would depend on the hierarchy of each project.
Just to clarify, all the proposals that were intentionally vulnerable and were really vulnerabilities were not accepted in the first place. However, that event triggered review of all University of Minnesota proposals, and that's what's being discussed here.
"ALL the proposals that were intentionally vulnerable and were really vulnerabilities were not accepted"
The thing is, there is no way you can actually know that. So this is not some kind of revenge; it is rather a valid precaution.
>The thing is there is no way you can actually know that.
We do know, since the researchers have told the Linux community, the university, and IEEE what those patches were. Please do not spread further misinformation about the case.
Please read the IEEE statement and the full Linux TAB review.
Just from a code quality process standpoint, that’s an interesting result. Now I’m wondering what would happen if you picked a set of 150 random kernel patches and told 80 reviewers to re-review them assuming they could be malicious. I bet you’d find quite a few fixes.
I was wondering the same thing, and it appears it would be worth the effort, but getting enough high quality reviewers would be a problem.
Maybe the NCAA could organize competitive code reviewing leagues? I bet you would get e.g. a highly motivated Caltech team reviewing USC contributed patches, and vice versa.
Isn't the moral of the story here that it's probably trivial for organizations like the FSB/NSA/Chinese equivalent to get malicious patches accepted into Linux?
Maybe the people reviewing the changes should be unaware of who made them. I am biased and for sure look more closely at some developers' pull requests than others.
For the sake of spotting malicious actors, wouldn't it make more sense for reviewers to be aware, so they can focus their attention on patches from untrusted sources?
[1] https://twitter.com/SarahJamieLewis/status/13848800341465743...
https://www.ieee-security.org/TC/SP2021/downloads/2021_PC_St...

https://lkml.org/lkml/2021/5/5/1244
Starting to feel like pen testing is a broken profession.