The professor gets exactly what they want here, no?
"We experimented on the linux kernel team to see what would happen. Our non-double-blind test of 1 FOSS maintenance group has produced the following result: We get banned and our entire university gets dragged through the muck 100% of the time".
That'll be a fun paper to write, no doubt.
Additional context:
* One of the committers of these faulty patches, Aditya Pakki, writes a reply taking offense at the 'slander' and indicating that the commit was in good faith[1].
Greg KH then immediately calls bullshit on this, and then proceeds to ban the entire university from making commits [2].
The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.
As was noted, this obviously has a bunch of collateral damage, but such drastic measures seem like a balanced response, considering that this university decided to _experiment_ on the kernel team and then lie about it when confronted (presumably, that lie was simply a continuation of their experiment: 'what would someone intentionally trying to add malicious code to the kernel do?').
* Abhi Shelat also chimes in with links to UMN's Institutional Review Board along with documentation on the UMN policies for ethical review. [3]
[1]: Message has since been deleted, so I'm going by the content of it as quoted in Greg KH's followup, see footnote 2
As an OSS maintainer (Node.js and a bunch of popular JS libs with millions of weekly downloads) - I feel how _tempting_ it is to trust people and assume good faith. Often since people took the time to contribute you want to be "on their side" and help them "make it".
Identifying and then standing up to bad-faith actors is extremely important and thankless work. Especially ones that apparently seem to think it's fine to experiment on humans without consent.
Apologies in advance if my questions are off the mark, but what does this mean in practice?
1. If UMN hadn't brought any attention to these, would they have been caught, or would they have eventually wound up in distros? 'stable' is the "production" branch?
2. What are the implications of this? Is it possible that other malicious actors have done things like this without being caught?
3. Will there be a post-mortem for this attack/attempted attack?
As an alumnus of the University of Minnesota's program I am appalled this was even greenlit. It reflects poorly on all graduates of the program, even those uninvolved. I am planning to email the department head with my disapproval as an alumnus, and I am deeply sorry for the harm this caused.
I would implore you to maintain the ban, no matter how hard the university tries to make amends. You sent a very clear message that this type of behavior will not be tolerated, and organizations should take serious measures to prevent malicious activities taking place under their purview. I commend you for that. Thanks for your hard work and diligence.
I'm currently wondering how many of these patches could've been flagged in an automated manner, in the sense of fuzzing the specific parts that were modified (with a fuzzer that is memory/binary aware).
Would a project like this be infeasible due to the sheer number of commits per day?
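One building block for the idea above would be cheap to automate: extracting exactly which files a submitted patch touches, so a fuzzer could be pointed only at the changed code rather than the whole tree. A toy sketch (the patch file and its contents are hypothetical, and a real pipeline would map files on to fuzz targets, e.g. via syzkaller subsystem descriptions):

```shell
# Hypothetical first step toward patch-targeted fuzzing: list the files a
# unified diff modifies. The sample patch below is made up for illustration.
cat > sample.patch <<'EOF'
--- a/net/atm/usbatm.c
+++ b/net/atm/usbatm.c
@@ -100,4 +100,6 @@
EOF
# Every '+++ b/<path>' header names a file the patch changes.
grep '^+++ b/' sample.patch | sed 's|^+++ b/||'
# prints: net/atm/usbatm.c
```

The hard part is not this extraction but generating meaningful inputs for the touched code paths, which is why nobody has a turnkey answer to the volume question.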
Putting the ethical question of the researcher aside, the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.
Since this researcher is apparently not an established figure in the kernel community, my expectation is that the patches went through the most rigorous review process. If you think the risk is high that malicious patches from this person got in, it means that an unknown attacker deliberately crafting a complex kernel loophole would have an even higher chance of getting patches in.
While I think the researcher's actions are out of line for sure, this "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.
This might not be on purpose. If you look at their article they're studying how to introduce bugs that are hard to detect not ones that are easy to detect.
Well, you or whoever was the responsible maintainer completely failed in reviewing these patches, which is your whole job as a maintainer.
Just reverting those patches (which may well be correct) makes no sense, you and/or other maintainers need to properly review them after your previous abject failure at doing so, and properly determine whether they are correct or not, and if they aren't how they got merged anyway and how you will stop this happening again.
Or I suppose step down as maintainers, which may be appropriate after a fiasco of this magnitude.
If the IRB is any good the professor doesn't get that. Universities are publish or perish, and the IRB should force the withdrawal of all papers they submitted. This might be enough to fire the professor with cause - including removing any tenure protection they might have - which means they get a bad reference.
I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct should take time to complete correctly and I want them to do their job correctly so I'll give them that time. (there is the possibility that these are good faith patches and someone in the linux community just hates this person - seems unlikely but until a proper independent investigation is done I'll leave that open.)
> We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.
This is, at the very least, worth an investigation from an ethics committee.
First of all, this is completely irresponsible, what if the patches would've made their way into a real-life device? The paper does mention a process through which they tried to ensure that doesn't happen, but it's pretty finicky. It's one missed email or one bad timezone mismatch away from releasing the kraken.
Then playing the slander victim card is outright stupid, it hurts the credibility of actual victims.
The mandate of IRBs in the US is pretty weird but the debate about whether this was "human subject research" or not is silly, there are many other ethical and legal requirements to academic research besides Title 45.
> I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct should take time to complete correctly and I want them to do their job correctly so I'll give them that time
That'd be great, yup. And the linux kernel team should then strongly consider undoing the blanket ban, but not until this investigation occurs.
Interestingly, if all that happens, that _would_ be an intriguing data point in research on how FOSS teams deal with malicious intent, heh.
We presented students with an education protocol designed to make a blind subset of them fail tests, then measured whether they failed the test to see if they independently learned the true meaning of the information.
Under any sane IRB you would need consent of the students. This is failure on so many levels.
I'm really not sure what the motive to lie is. You got caught with your hand in the cookie jar, time to explain what happened before they continue to treat you like a common criminal. Doing a pentest and refusing to state it was a pentest is mind boggling.
Has anyone from the "research" team commented and confirmed this was even them or a part of their research? It seems like the only defense is from people who did google-fu for a potentially outdated paper. At this point we can't even be sure if this isn't a genuinely malicious actor using compromised credentials to introduce vulnerabilities.
It's also not a pen test. Pen testing is explicitly authorized: you play the role of an attacker, with consent from your victim, in order to report security issues to your victim. This is just straight-up malicious behavior, where the "researchers" play the role of an attacker, without consent from their victim, for personal gain (in this case, publishing a paper).
Hearing how you phrased it reminds me of a study that showed how parachutes do not in fact save lives (the study was more to show the consequences of extrapolating data, so the result should not be taken seriously):
Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
With the footnote: Contributors: GCSS had the original idea. JPP tried to talk him out of it. JPP did the first literature search but GCSS lost it. GCSS drafted the manuscript but JPP deleted all the best jokes. GCSS is the guarantor, and JPP says it serves him right
I liked this bit, from the footnotes: "Contributors: RWY had the original idea but was reluctant to say it out loud for years. In a moment of weakness, he shared it with MWY and BKN, both of whom immediately recognized this as the best idea RWY will ever have."
Well, part of the experiment is to see how deliberate malicious commits are handled. Banning is the result. They got what they wanted. Play stupid games, win stupid prizes.
"The University of Minnesota Department of Computer Science & Engineering takes this situation extremely seriously. We have immediately suspended this line of research."
But this raises an obvious question: Doesn't Linux need better protection against someone intentionally introducing security vulnerabilities? If we have learned anything from the SolarWinds hack, it is that if there is a way to introduce a vulnerability then someone will do it, sooner or later. And they won't publish a paper about it, so that shouldn't be the only way to detect it!
So, it turns out that sometimes programmers introduce bugs into software. Sometimes intentionally, but much more commonly accidentally.
If you've got a suggestion of a way to catch those bugs, please be more specific about it. Just telling people that they need "better protection" isn't really useful or actionable advice, or anything that they weren't already aware of.
That question has been obvious for quite some time. It is always possible to introduce subtle vulnerabilities. Research has tried for decades to come up with a solution, to no real avail.
The problem with such an experiment is that it can be a front. If you are a big entity, gov, or whatever, and you need to insert a vulnerability in the kernel, you can start a "research project". Then you try to inject it under this pretense, and if it fails, you can always say "my bad, it was for science".
I had a uni teacher who thought she was a genius because her research team peppered Wikipedia with fake information while timing how long it took to remove it.
"Earth is center of universe" took 1000 years to remove from books, I'm not sure what her point was :D
Joke's on you - this was really sociology research on anger response levels of open source communities when confronted with things that look like bad faith.
Setting aside the ethical aspects which others have covered pretty thoroughly, they may have violated 18 U.S.C. §1030(a)(5) or (b). This law is infamously broad and intent is easier to "prove" than most people think, but #notalawyer #notlegaladvice. Please don't misinterpret this as a suggestion that they should or should not be prosecuted.
So, the patch was about a possible double-free, detected presumably from a bad static analyzer. Couldn't this patch have been done in good faith? That's not at all impossible.
However, the prior activity of submitting bad-faith code is indeed pretty shameful.
I'm not a linux kernel maintainer but it seems like the maintainers all agree it's extremely unlikely a static analyzer could be so wrong in so many different ways.
I think this hasn't gone far enough. The university has shown that it is willing to allow its members to act in bad faith for their own interests, under the auspices of acting ethically for scientific reasons. The university itself cannot be trusted _ever again_.
Blacklist the whole lot from everything, everywhere. Black-hole that place and nuke it from orbit.
What would be the point? Of course people can miss things in code review. Yet the Linux developer base and user base has decided that generally an open submission policy has benefits that outweigh the risks.
Should every city park with a "no alcohol" policy conduct red teams on whether it's possible to smuggle alcohol in? Should police departments conduct red teams to see if people can get away with speeding?
Not that I approve of the methods, but why would an IRB be involved in a computer security study? IRBs are for human subjects research. If we have to run everything that looks like any kind of research through IRBs, the Western gambit on technical advantage is going to run into some very hard times.
The subjects were the kernel team. They should have had consent to be part of this study. It's like red team testing, someone somewhere has to know about it and consent to it.
It wasn’t a real experiment, it was a legitimate attempt to insert bugs into the code base and this professor was going to go on speaking tours to self promote and talk about how easy it was to crack Linux. If it looks like grift it’s probably grift. This was never about science.
Yet another reason to absolutely despise the culture within academia. The US Federal government is subsidizing a collection of pathologically toxic institutions, and this is one of many results, along with HR departments increasingly mimicking the campus tribalism.
Nothing. However if they can't claim ownership of the drama they have caused it's not useful for research that's publishable so it does nix these idiots from causing further drama while working at this institution. For now.
> What's preventing those bad actors from not using a UMN email address?
Technically none, but by banning UMN submissions, the kernel team have sent an unambiguous message that their original behaviour is not cool. UMN's name has also been dragged through the mud, as it should be.
Prof Lu exercised poor judgement by getting people to submit malicious patches. To use further subterfuge knowing that you've already been called out on it would be monumentally bad.
I don't know how far Greg has taken this issue up with the university, but I would expect that any reasonable university would give Lu a strong talking-to.
Nothing. I think the idea is 60% deterrence via collective punishment - "if we punish the whole university, people will be less likely to do this in future" - and 40% "we must do something, and this is something, therefore we must do it".
If they just want to be jerks, yes. But they can't then use that type of "hiding" to get away with claiming it was done for a University research project as that's even more unethical than what they are doing now.
Isn't this reaction a bit like the emperor banishing anyone who tells him that his new clothes are fake? Are the maintainers upset that someone showed how easy it is to subvert kernel security?
It’s more like the emperor banning a group of people who put the citizens in danger just so they could show that it could be done. The researchers did something unethical and acted in a self-serving manner. It’s no surprise that someone would get kicked out of a community after seriously breaking the trust of that community.
> Because of this, I will now have to ban all future contributions from
your University.
Understandable from gkh, but I feel sorry for any unrelated research happening at University of Minnesota.
EDIT: Searching through the source code[1] reveals contributions to the kernel from umn.edu emails in the form of an AppleTalk driver and support for the kernel on PowerPC architectures.
In the commit traffic[2], I think all patches have come from people currently being advised by Kangjie Liu[3] or Liu himself dating back to Dec 2018. In 2018, Wenwen Wang was submitting patches; during this time he was a postdoc at UMN and co-authored a paper with Liu[4].
Prior to 2018, commits involving UMN folks appeared in 2014, 2013, and 2008. None of these people appear to be associated with Liu in any significant way.
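The kind of archaeology described above is a plain `git log` author query. A hedged sketch, run here against a throwaway repo rather than a kernel checkout (the names and messages are made up; on a real tree the same `--author` filter applies):

```shell
# Build a disposable repo with one commit from a umn.edu address and one
# from elsewhere, then list only the umn.edu-authored commits.
rm -rf /tmp/umn-demo && git init -q /tmp/umn-demo && cd /tmp/umn-demo
git -c user.name=Demo -c user.email=demo@umn.edu \
    commit -q --allow-empty -m 'example patch'
git -c user.name=Other -c user.email=other@example.com \
    commit -q --allow-empty -m 'unrelated patch'
git log --all --author='@umn.edu' --format='%an <%ae> %s'
# prints: Demo <demo@umn.edu> example patch
```

Note that this only catches contributors who used their institutional address, which is exactly why the thread also cross-references advisees of Lu who may have submitted from other addresses.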
> I think all patches have come from people currently being advised by Kangjie Liu[3] or Liu himself dating back to Dec 2018
New plan: Show up at Liu's house with a lock picking kit while he's away at work, pick the front door and open it, but don't enter. Send him a photo, "hey, just testing, bro! Legitimate security research!"
If they wanted to do security research, they could have done so in the form of asking the reviewers to help; send them a patch and ask 'Is this something you would accept?', instead of intentionally sending malicious commits and causing static on the commit tree and mailing lists.
This is funny, but not at all a good analogy. There's obviously not remotely as much public interest or value in testing the security of this professor's private home to justify invading his privacy for the public interest. On the other hand, if he kept dangerous things at home (say, BSL-4 material), then his house would need 24/7 security and you'd probably be able to justify testing it regularly for the public's sake. So the argument here comes down to which extreme you believe the Linux kernel is closer to.
I wouldn't be surprised if the good, conscientious members of the UMN community showed up at his office (or home) door to explain, in vivid detail, the consequences of doing unethical research.
The actual equivalent would be to steal his computer, wait a couple days to see his reaction, get a paper published, then offer to return the computer.
If this experience doesn't change not only the behavior of U of M's IRB but inform the behavior of every other IRB, then nothing at all is learned from this experience.
Unless both the professors and leadership from the IRB are having an uncomfortable lecture in the chancellor's office, nothing at all changes.
This is not responsible research. This is similar to initiating fluid mechanics experiments on the wings of a Lufthansa A320 in flight to Frankfurt with a load of Austrians.
There are a lot of people to feel bad for, but none is at the University of Minnesota. Think of the Austrians.
No, it's totally okay to feel sorry for good, conscientious researchers and students at the University of Minnesota who have been working on the kernel in good faith. It's sad that the actions of irresponsible researchers and associated review boards affect people who had nothing to do with professor Lu's research.
It's not wrong for the kernel community to decide to blanket ban contributions from the university. It obviously makes sense to ban contributions from institutions which are known to send intentionally buggy commits disguised as fixes. That doesn't mean you can't feel bad for the innocent students and professors.
> This is similar to initiating fluid mechanics experiments on the wings of a Lufthansa A320 in flight to Frankfurt with a load of Austrians.
This analogy is invalid, because:
1. The experiment is not on live, deployed, versions of the kernel.
2. There are mechanisms in place for preventing actual merging of the faulty patches.
3. Even if a patch is merged by mistake, it can be easily backed out or replaced with another patch, and the updates pushed anywhere relevant.
All of the above is not true for the in-flight airline.
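Point 3 in particular is mechanical: a change merged by mistake is backed out with `git revert`, which records a new commit undoing the bad one. A toy sketch in a throwaway repo (file names and contents are hypothetical):

```shell
# Demonstrate backing out a faulty patch with git revert.
rm -rf /tmp/revert-demo && git init -q /tmp/revert-demo && cd /tmp/revert-demo
git config user.email demo@example.com && git config user.name Demo
echo 'good code' > driver.c && git add driver.c && git commit -q -m 'initial'
echo 'buggy code' > driver.c && git commit -qam 'faulty patch'
git revert --no-edit HEAD >/dev/null   # new commit undoing the faulty patch
cat driver.c
# prints: good code
```

This is why the reverts in the thread are cheap to issue; the expensive part is the human re-review of each reverted patch.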
However - I'm not claiming the experiment was not ethically faulty. Certainly, the U Minnesota IRB needs to issue a report and an explanation on its involvement in this matter.
It's important to note that they used temporary emails for the patches in this research. It's detailed in the paper.
The main problem is that they have (so far) refused to explain in detail how and where the patches were reviewed. I have not gotten any links to any lkml post even after Kangjie Lu personally emailed me to address any concerns.
Seems like a bit of a strong response. Universities are large places with lots of professors and people with different ideas, opinions, views, and they don't work in concert, quite the opposite. They're not some corporation with some unified goal or incentives.
I like that. That's what makes universities interesting to me.
I don't like the standard here of penalizing or lumping everyone there together, regardless of whether they contributed in the past, contribute now, will in the future, or not.
The goal is not penalizing or lumping everyone together. The goal is to have the issue fixed in the most effective manner. It's not the Linux team's responsibility to allow contributions from some specific university, it's the university's. This measure enforces that responsibility. If they want access, they should rectify.
One way to get everyone in a university on the same page is to punish them all for the bad actions of a few. It appears like this won't work here because nobody else is contributing and so they won't notice.
This was approved by the university's ethics board, so if trust in the university rests in part on its members' actions passing an ethics bar, it makes sense to remove that trust until the ethics committee has shown that it has improved.
I'd concur: the university is the wrong unit-of-ban.
For example: what happens when the students graduate- does the ban follow them to any potential employers? Or if the professor leaves for another university to continue this research?
Does the ban stay with UMN, even after everyone involved left? Or does it follow the researcher(s) to a new university, even if the new employer had no responsibility for them?
It's the university that allowed the research to take place. It's the university's responsibility to fix their own organisation's issues. The kernel has enough on their plate than to have to figure out who at the university is trustworthy and who isn't considering their IRB is clearly flying blind.
That is completely irrelevant. They are acting under the university, and their "research" is backed by the university and approved by the university's department.
If the university has a problem, then they should first look into managing this issue at their end, or force people to use personal email IDs for such purposes.
I don't feel sorry at all. If you want to contribute from there, show that the rogue professor and their students have been prevented from making further malicious contributions (which probably means, at minimum, barring them from any contribution at all for quite a long period; that is fair given the repeated infractions), and I'm sure that you will be able to contribute back again under the University umbrella.
If you don't manage to reach that goal, too bad, but you can contribute on a personal capacity, and/or go work elsewhere.
How could a single student or professor possibly achieve that? Under the banner of "academic freedom" it is very hard to get someone fired because you don't like their research.
It sounds like you're making impossible demands of unrelated people, while doing nothing to solve the actual problem because the perpetrators now know to just create throwaway emails when submitting patches.
It definitely would suck to be someone at UMN doing legitimate work, but I don't think it's reasonable to ask maintainers to also do a background check on who the contributor is and who they're advised by.
How thorough is IRB review? My gut feeling is that these are not necessarily the most conscientious or informed bodies. Add into the mix a proposal that conceals the true nature of what's happening.
(All of this ASSUMING that the intent was as described in the thread.)
Seems extreme. One unethical researcher blocks work for others just because they happen to work for the same employer? They might not even know the author of the paper...
The university reviewed the "study" and said it was acceptable. From the email chain, it looks like they've already complained to the university multiple times, and have apparently been ignored. Banning anyone at the university from contributing seems like the only way to handle it since they can't trust the institution to ensure its students aren't doing unethical experiments.
Well, the decision can always be reversed, but at the outset I would say banning the entire university and publicly naming them is a good start. I don't think this kind of "research" is ethical, and the issue needs to be raised. Banning them is a good opener to engage the institution in a dialogue.
It is an extreme response to an extreme problem. If the other researchers don't like the situation, they are free to raise the problem with the university and have the university clean up the mess it obviously has.
Well, shit happens. Imagine doctors working in organ transplants, and one of them damages people's trust by selling access to organs to rich patients. Of course that damages the field for everyone. And to deal with such issues, doctors have an ethics code, and in many countries associations which will sanction bad eggs. Perhaps scientists need something like that, too?
> Not a big loss: these professors likely hate open source.
> They are conducting research to demonstrate that it is easy to introduce bugs in open source...
That's a very dangerous thought pattern. "They try to find flaws in a thing I find precious, therefore they must hate that thing." No, they may just as well be trying to identify flaws to make them visible and therefore easier to fix. Sunlight being the best disinfectant, and all that.
(Conversely, people trying to destroy open source would not publicly identify themselves as researchers and reveal what they're doing.)
> whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards
How do we know that? We know things by regularly testing them. That's literally what this research is - checking how likely it is that intentional vulnerabilities are caught during review process.
> It's likely a university with professors that hate open source.
This is a ridiculous conclusion. I do agree with the kernel maintainers here, but there is no way to conclude that the researchers in question "hate open source", and certainly not that such an attitude is shared by the university at large.
> the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards
That's not true at all. There are many internet-critical projects with tons of holes that are not found for decades, because nobody except the core team ever looks at the code. You have to actually write tests, do fuzzing, static/memory analysis, etc to find bugs/security holes. Most open source projects don't even have tests.
Assuming people are always looking for bugs in FOSS projects is like assuming people are always looking for code violations in skyscrapers, just because a lot of people walk around them.
> (whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)
Which is why there have never been multi-year critical security vulnerabilities in FOSS software.... right?
Sarcasm aside, because of how FOSS software is packaged on Linux we've seen critical security bugs introduced by package maintainers into software that didn't have them!
Some clarifications since they are unclear in the original report.
- Aditya Pakki (the author who sent the new round of seemingly bogus patches) is not involved in the S&P 2021 research. This means Aditya is likely to have nothing to do with the prior round of patching attempts that led to the S&P 2021 paper.
- According to the authors' clarification [1], the S&P 2021 paper did not introduce any bugs into Linux kernel. The three attempts did not even become Git commits.
Greg has all reasons to be unhappy since they were unknowingly experimented on and used as lab rats. However, the round of patches that triggered his anger *are very likely* to have nothing to do with the three intentionally incorrect patch attempts leading to the paper. Many people on HN do not seem to know this.
There is no doubt that Kangjie is involved in Aditya's research work, which leads to bogus patches sent to Linux devs. However, based on my understanding of how CS research groups usually function, I do not think Kangjie knew the exact patches that Aditya sent out. In this specific case, I feel Aditya is more likely the one to blame: He should have examined these automatically generated patches more carefully before sending them in for reviewing.
Aditya's story about the new patches is that he was writing a static analysis tool and was testing it by... submitting PRs to the Linux kernel? He's either exploiting the Linux maintainers to test his new tool, or that story's bullshit. Even taking his story at face value is justification to at least ban him personally IMO.
Sounds like these commits aren't related to that paper, they're related to the next paper he's working on, and the next one is making the same category error about human subjects in his study.
he claims "Hi there! My name is Aditya and I’m a second year Ph.D student in Computer Science & Engineering at the University of Minnesota. My research interests are in the areas of computer security, operating systems, and machine learning. I’m fortunate to be advised by Prof. Kangjie Lu."
so he in no uncertain terms is claiming that he is being advised in his research by Kangjie Lu. So it's incorrect to say his patches have nothing to do with the paper.
I would encourage you not to post people's contact information publicly, specially in a thread as volatile as this one. Writing "He claims in his personal website" would bring the point across fine.
This being the internet, I'm sure the guy is getting plenty of hate mail as it is. No need to make it worse.
> So it's incorrect to say his patches have nothing to do with the paper.
Professors usually work on multiple projects, which involve different grad students, at the same time. Aditya Pakki could be working on a different project with Kangjie Lu, and not be involved with the problematic paper.
> S&P 2021 paper did not introduce any bugs into Linux kernel.
I used to work as an auditor. We were expected to conduct our audits to neither expect nor not expect instances of impropriety to exist. However, once we had grounds to suspect malfeasance, we were "on alert", and conduct tests accordingly.
This is a good principle that could be applied here. We could bat backwards and forwards about whether the other submissions were bogus, but the presumption must now be one of guilt rather than innocence.
Personally, I would have been furious and said, in no uncertain terms, that the university keep a low profile and STFU lest I be sufficiently provoked to taking actions that lead to someone's balls being handed to me on a plate.
What sort of lawsuit might they bring against a university whose researchers deliberately inserted malicious code into software that literally runs a good portion of the world?
I'm no lawyer, but it seems like there'd be something actionable.
On a side note, this brings into question any research written by any of the participating authors, ever. No more presumption of good faith.
> According to the authors' clarification [1], the S&P 2021 paper did not introduce any bugs into Linux kernel. The three attempts did not even become Git commits.
Except that at least one of those three did [0]. The author is incorrect that none of their attempts became git commits. Whatever process they used to "check different versions of Linux and further confirmed that none of the incorrect patches was adopted" was insufficient.
> The author is incorrect that none of their attempts became git commits
That doesn't appear to be one of the three patches from the "hypocrite commits" paper, which were reportedly submitted from pseudonymous gmail addresses. There are hundreds of other patches from UMN, many from Pakki[0], and some of those did contain bugs or were invalid[1], but there's currently no hard evidence that Pakki was deliberately making bad-faith commits--just the association of his advisor being one of the authors of the "hypocrite" paper.
But Kangjie Lu, Pakki’s advisor, was one of the authors. The claim that “You, and your group, have publicly admitted to sending known-buggy patches” may not be totally accurate (or it might be—Pakki could be on other papers I’m not aware of), but it’s not totally inaccurate either. Most academic work is variations on a theme, so it’s reasonable to be suspicious of things from Lu’s group.
As Greg KH notes when it is suggested that he write a formal complaint, he has no time to deal with such BS. He has no time to play detective: you are involved in a group that does BS, and this smells like BS again, so you get banned.
It shouldn’t be up to the victim to sort that out. The only thing that could perhaps have changed here is for the university wide ban to have been announced earlier. Perhaps the kernel devs assumed that no one would be so shameless as to continue to send students back to someone they had already abused.
The person in power here is Greg KH. It seems like he can accept/reject/ban anyone for any reason with little recourse for the counter-party. I'm willing to withhold judgement on these allegations until the truth comes out. Seems like many here want retribution before any investigation.
There's only one way the kernel dev team can afford to look at this: A bad actor tried to submit malicious code to the kernel using accounts on the U of M campus. They can't afford to assume that the researchers weren't malicious, because they didn't follow the standards of security research and did not lay out rules of engagement for the pentest. Because that trust was violated, and because nobody in the research team made the effort to contact the appropriate members of the dev team (in this case, they really shoulda taken it to Torvalds), the kernel dev team can't risk taking another patch from U of M because it might have hidden vulns in it. For all we know, Aditya Pakki is a pseudonym. For all we know, the researchers broke into Aditya's email account as part of their experiment--they've already shown that they have a habit of ignoring best practices in infosec and 'forgetting' to ask permission before conducting a pentest.
From his message, the ones that triggered his anger were patches he believed to be obviously useless and therefore either incompetently submitted or submitted as some other form of experimentation. After the intentionally incorrect patches, he could no longer allow the presumption of good faith.
It doesn't matter. I think this is totally appropriate. A group of students are submitting purposely buggy patches? It isn't the kernel team's job to sift through and distinguish which are which, so they come down and nuke the entire university. This sends a message to any other university thinking of a similar stunt: you try this bull hockey, and you and your entire university are going to get caught in the blast radius.
On the plus side, I guess they get a hell of a result for that research paper they were working on.
"We sought to probe vulnerabilities of the open-source public-development process, and our results include a methodology for getting an entire university's email domain banned from contributing."
I read through that clarification doc. I don't like their experiment but I have to admit their patch submission process is responsible (after receiving a "looks good" for the bad patch, point out the flaw in the patch, give the correct fix and make sure the bad patch doesn't get into the tree).
This isn't friendly pen-testing in a community, this is an attack on critical infrastructure using a university as cover. The foundation should sue the responsible profs personally and seek criminal prosecution. I remember a bunch of U.S. contractors said they did the same thing to one of the openbsd vpn library projects about 15 years ago as well.
What this professor is proving out is that open source and (likely, other) high trust networks cannot survive really mendacious participants, but perhaps by mistake, he's showing how important it is to make very harsh and public examples of said actors and their mendacity.
I wonder if some of these or other bug contributors have also complained that the culture of the project governance is too aggressive, that project leads can create an unsafe environment, and discourage people from contributing? If counter-intelligence prosecutors pull on this thread, I have no doubt it will lead to unravelling a much broader effort.
Not everything can be fixed with the criminal justice system. This should be solved with disciplinary action by the university (and possibly will be [1]).
I am not knowledgeable enough to know if this intent is provable, but if someone can frame the issue appropriately, it feels like it could be good to report this to the FBI tip line so it is at least on their radar.
Organizing an effort, with a written mandate, to knowingly introduce kernel vulnerabilities, through deception, that will spread downstream into other Linux distributions, likely including firmware images, which may not be patched or reverted for months or years - does not warrant a criminal investigation?
The foundation should use recourse to the law to signal they are handling it, if only to prevent these profs from being mobbed.
Here's a clarification from the researchers over at UMN [1].
They claim that none of the bogus patches were merged to the stable code line:
>Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.
I haven't been able to find out what the 3 patches which they reference are, but the discussions on Greg's UMN revert patch [2] do indicate that some of the fixes have indeed been merged to stable and are actually bogus.
The response makes the researchers seem clueless, arrogant, or both - are they really surprised that kernel maintainers would get pissed off at someone deliberately wasting their time?
From the post:
* Does this project waste certain efforts of maintainers?
Unfortunately, yes. We would like to sincerely apologize to the maintainers involved in the corresponding patch review process; this work indeed wasted their precious time. We had carefully considered this issue, but could not figure out a better solution in this study. However, to minimize the wasted time, (1) we made the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we tried hard to find three real bugs, and the patches ultimately contributed to fixing them
"Yes, this wastes maintainers time, but we decided we didn't care."
Fascinating that the research was judged not to involve human subjects....
As someone not part of academia, how could this research be judged to not involve people? It _seems_ obvious to me that the entire premise is based around tricking/deceiving the kernel maintainers.
No one says "wasted their precious time" in a sincere apology. The word 'precious' here is exclusively used for sarcasm in the context of an apology, as it does not represent a specific technical term such as might appear in a gemology apology.
Or more charitably: "Yes, this spent some maintainers time, but only a small amount and it resulted in bugfixes, which is par for the course of contributing to linux"
Your honor, I tried to find any solution for testing this new poison without poisoning a bunch of people, but I carefully considered it and I couldn't find any, so I went ahead and secretly poisoned them. Clearly, I am innocent! Though I sincerely apologize for any inconvenience caused.
In the end, the damage has been done and the Linux developers are now going back and removing all patches from any user with a @umn.edu email.
Not sure how the researchers didn't see how this would backfire, but it's a hopeless misuse of their time. I feel really bad for the developers who now have to spend their time fixing shit that shouldn't even be there, just because someone wanted to write a paper and their peers didn't see any problems either. How broken is academia really?
This, in and of itself, is a finding. The researchers will justify their research with "we were banned, which is a possible outcome of this kind of research..." I find this disingenuous. When a community of open source contributors is partially built on trust, then violators can and will be banned.
The researchers should have approached the maintainers to get buy-in, and set up a methodology where a maintainer would not interfere until a code merge was imminent, and just play referee in the meantime.
I feel the same way. People don't understand how difficult it is to be a maintainer. This is very selfish behaviour. Appreciate Greg's strong stance against it.
> I haven't been able to find out what the 3 patches which they reference are, but the discussions on Greg's UMN revert patch [2] do indicate that some of the fixes have indeed been merged to stable and are actually bogus.
That's because those are two separate incidents. The study which resulted in 3 patches was completed some time last year, but this new round of patches is something else.
It's not clear whether the patches are coming from the same professor/group, but it seems like the author of these bogus patches is a PhD student working with the professor who conducted that study last year. So there is at least one connection.
EDIT: also, those 3 patches were supposedly submitted using a fake email address according to the "clarification" document released after the paper was published. So they probably didn't use a @umn.edu email at all.
To help clarify, for purposes of continuing the discussion: the original research did address the issue of minimizing the time of the reviewers [1][2]. Seems the maintainers were OK with that, as no actions were taken other than an implied request to stop that kind of research.
Now a different researcher from UMN, Aditya Pakki, has submitted a patch which contains bugs and seems to be attempting the same type of pen testing, although the PhD student denied it.
> Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.
2. Clarifications on the “hypocrite commit” work (FAQ)
"* Does this project waste certain efforts of maintainers?
Unfortunately, yes. We would like to sincerely apologize to the maintainers involved in the corresponding patch review process; this work indeed wasted their precious time. We had carefully considered this issue, but could not figure out a better solution in this study. However, to minimize the wasted time, (1) we made the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we tried hard to find three real bugs, and the patches ultimately contributed to fixing them."
Agreed. This feels more like an involuntary social experiment, and it just uses up the kernel maintainers' bandwidth.
Reviewing code is difficult, even more so when the committer has set out to introduce bad code in the first place.
It's disrespectful to people who are contributing their personal time while working for free on open source projects.
With more than 60% of all academic publications not being reproducible [1], one would think academia has better things to do than wasting other people's time.
I wonder why they didn't just ask in advance. Something like 'we would like to test your review process over the next 6 months and will inform you before a critical patch hits the users', might have been a win-win scenario.
That doesn't conflict with the statement. If the IRB looks at something and exempts it, it has no reason to report that to the hierarchy in any way, because that's a routine process.
Not sure how this university is run but this doesn't sound plausible to me.
>... learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel
And this sounds like mainly a lot of damage control is going to happen.
>We will report our findings back to the community as soon as practical.
Why does it sound implausible? In any uni I've interacted with, profs did pretty much their own thing and without a reason very little attention is paid to how they do it (or even what they do).
Edit 2: Now that the first responses to the reversion are trickling in, some merged patches were indeed discovered to be malicious, like the following. Most of them seem to be fine, though, or at least non-malicious. https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...
Also, https://www.neowin.net/news/linux-bans-university-of-minneso... gives a bit of an overview. (It was posted at https://news.ycombinator.com/item?id=26889677, but we've merged that thread hither.)
Edit: related ongoing thread: UMN CS&E Statement on Linux Kernel Research - https://news.ycombinator.com/item?id=26895510 - April 2021 (205 comments and counting)
"We experimented on the linux kernel team to see what would happen. Our non-double-blind test of 1 FOSS maintenance group has produced the following result: We get banned and our entire university gets dragged through the muck 100% of the time".
That'll be a fun paper to write, no doubt.
Additional context:
* One of the committers of these faulty patches, Aditya Pakki, writes a reply taking offense at the 'slander' and indicating that the commit was in good faith[1].
Greg KH then immediately calls bullshit on this, and then proceeds to ban the entire university from making commits [2].
The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.
As was noted, this obviously has a bunch of collateral damage, but such drastic measures seem like a balanced response, considering that this university decided to _experiment_ on the kernel team and then lie about it when confronted (presumably, that lie is simply continuing their experiment of 'what would someone intentionally trying to add malicious code to the kernel do')?
* Abhi Shelat also chimes in with links to UMN's Institutional Review Board along with documentation on the UMN policies for ethical review. [3]
[1]: Message has since been deleted, so I'm going by the content of it as quoted in Greg KH's followup, see footnote 2
[2]: https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
[3]: https://lore.kernel.org/linux-nfs/3B9A54F7-6A61-4A34-9EAC-95...
I also now have submitted a patch series that reverts the majority of all of their contributions so that we can go and properly review them at a later point in time: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
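Mechanically, a mass revert like this boils down to listing the suspect commits and generating one revert per commit. As a rough illustration only (this is not Greg's actual tooling; the SHAs, log lines, and helper name below are made up), the selection step could be sketched over `git log --format='%H %ae'` output:

```python
# Sketch: pick out commits authored from a given email domain and
# emit one `git revert` command per commit. Illustrative only; the
# SHAs and log lines are invented, not real kernel history.

def revert_commands(log_lines, domain="umn.edu"):
    """log_lines: lines of '<sha> <author-email>', e.g. produced by
    `git log --format='%H %ae'`. Returns git revert commands for
    every commit whose author email is in `domain`."""
    cmds = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed lines
        sha, email = parts
        if email.lower().endswith("@" + domain):
            cmds.append(f"git revert --no-commit {sha}")
    return cmds

log = [
    "aaa111 student@umn.edu",
    "bbb222 dev@example.org",
    "ccc333 postdoc@UMN.edu",
]
for cmd in revert_commands(log):
    print(cmd)
```

Each generated revert would still need the manual re-review described above; `--no-commit` just stages the reverts so they can be reviewed together as one series.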
As an OSS maintainer (Node.js and a bunch of popular JS libs with millions of weekly downloads) - I feel how _tempting_ it is to trust people and assume good faith. Often since people took the time to contribute you want to be "on their side" and help them "make it".
Identifying and then standing up to bad-faith actors is extremely important and thankless work. Especially ones that apparently seem to think it's fine to experiment on humans without consent.
So thanks. Keep it up.
From a different thread: https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N... > A lot of these have already reached the stable trees.
Apologies in advance if my questions are off the mark, but what does this mean in practice?
1. If UMN hadn't brought any attention to these, would they have been caught, or would they have eventually wound up in distros? Is 'stable' the "production" branch?
2. What are the implications of this? Is it possible that other malicious actors have done things like this without being caught?
3. Will there be a post-mortem for this attack/attempted attack?
What a joke - not sure how they can rationalize this as valuable behavior.
Also to assume _all_ commits made by UMN, beyond what's been disclosed in the paper, are malicious feels a bit like an overreaction.
I'm currently wondering how many of these patches could've been flagged in an automated manner, in the sense of fuzzing the specific parts that have been modified (with a fuzzer that is memory/binary aware).
Would a project like this be unfeasible due to the sheer amount of commits/day?
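For a sense of what "fuzzing the modified parts" would involve, the cheap first step is just extracting which files and lines a patch touches so a fuzzer could be pointed at them. A toy sketch (the diff below is invented, and real patch analysis would need far more than this):

```python
import re

def touched_hunks(diff_text):
    """Map each file modified by a unified diff to the starting
    (post-patch) line numbers of its changed hunks - the regions a
    targeted fuzzer would want to exercise first."""
    targets = {}
    current = None
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            current = line[len("+++ b/"):]
            targets.setdefault(current, [])
        elif current is not None:
            m = re.match(r"@@ -\d+(?:,\d+)? \+(\d+)", line)
            if m:
                targets[current].append(int(m.group(1)))
    return targets

diff = """\
--- a/drivers/foo.c
+++ b/drivers/foo.c
@@ -10,4 +10,5 @@
 context line
+added line
"""
print(touched_hunks(diff))  # {'drivers/foo.c': [10]}
```

Whether running memory-aware fuzzing against every such region at the kernel's commit volume is feasible is exactly the open question above; this only shows that the target-selection step itself is cheap.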
Are you not concerned these malicious "researchers" will simply start using throwaway gmail addresses?
Since this researcher is apparently not an established figure in the kernel community, my expectation is that the patches went through the most rigorous review process. If you think there is a high risk that malicious patches from this person got in, it means that an unknown attacker deliberately constructing a complex kernel loophole would have an even higher chance of getting patches in.
While I think the researcher's actions are out of line for sure, this "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.
THANK YOU! After reading the email chain, I have a much greater appreciation for the work you do for the community!
Just reverting those patches (which may well be correct) makes no sense, you and/or other maintainers need to properly review them after your previous abject failure at doing so, and properly determine whether they are correct or not, and if they aren't how they got merged anyway and how you will stop this happening again.
Or I suppose step down as maintainers, which may be appropriate after a fiasco of this magnitude.
I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct should take time to complete correctly and I want them to do their job correctly so I'll give them that time. (there is the possibility that these are good faith patches and someone in the linux community just hates this person - seems unlikely but until a proper independent investigation is done I'll leave that open.)
https://raw.githubusercontent.com/QiushiWu/qiushiwu.github.i...
> We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.
First of all, this is completely irresponsible, what if the patches would've made their way into a real-life device? The paper does mention a process through which they tried to ensure that doesn't happen, but it's pretty finicky. It's one missed email or one bad timezone mismatch away from releasing the kraken.
Then playing the slander victim card is outright stupid, it hurts the credibility of actual victims.
The mandate of IRBs in the US is pretty weird but the debate about whether this was "human subject research" or not is silly, there are many other ethical and legal requirements to academic research besides Title 45.
That'd be great, yup. And the linux kernel team should then strongly consider undoing the blanket ban, but not until this investigation occurs.
Interestingly, if all that happens, that _would_ be an intriguing data point in research on how FOSS teams deal with malicious intent, heh.
I think the real problem is rooted more fundamentally in academia than it seems. And I think it has mostly to do with a lack of ethics!
We presented students with an education protocol designed to make a blind subset of them fail tests. Then we measured whether they failed the test, to see if they independently learned the true meaning of the information.
Under any sane IRB you would need consent of the students. This is failure on so many levels.
(edit to fix typo)
Has anyone from the "research" team commented and confirmed this was even them, or a part of their research? It seems like the only defense is from people who did google-fu for a potentially outdated paper. At this point we can't even be sure this isn't a genuinely malicious actor using compromised credentials to introduce vulnerabilities.
https://www.bmj.com/content/363/bmj.k5094
Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
With the footnote: Contributors: GCSS had the original idea. JPP tried to talk him out of it. JPP did the first literature search but GCSS lost it. GCSS drafted the manuscript but JPP deleted all the best jokes. GCSS is the guarantor, and JPP says it serves him right
People got swatted for less.
As heard frequently on ASP, along with "Room Temperature Challenge."
"The University of Minnesota Department of Computer Science & Engineering takes this situation extremely seriously. We have immediately suspended this line of research."
If you've got a suggestion of a way to catch those bugs, please be more specific about it. Just telling people that they need "better protection" isn't really useful or actionable advice, or anything that they weren't already aware of.
Yes, it does.
Now, how do you do that other than having fallible people review things?
"Earth is center of universe" took 1000 years to remove from books, I'm not sure what her point was :D
However, the prior activity of submitting bad-faith code is indeed pretty shameful.
It's a different university, but I wonder if these people will see the same result.
Black list the whole lot from everything, everywhere. Black hole that place and nuke it from orbit.
Should every city park with a "no alcohol" policy conduct red teams on whether it's possible to smuggle alcohol in? Should police departments conduct red teams to see if people can get away with speeding?
I don't think they're a professor are they? Says they're a PhD student?
What's preventing those bad actors from not using a UMN email address?
Technically none, but by banning UMN submissions, the kernel team have sent an unambiguous message that their original behaviour is not cool. UMN's name has also been dragged through the mud, as it should be.
Prof Lu exercised poor judgement by getting people to submit malicious patches. To use further subterfuge, knowing that you've already been called out on it, would be monumentally bad.
I don't know how far Greg has taken this issue up with the university, but I would expect that any reasonable university would give Lu a strong talking-to.
They gain some trust by coming from university email addresses.
If they just want to be jerks, yes. But they can't then use that type of "hiding" to get away with claiming it was done for a University research project as that's even more unethical than what they are doing now.
> Because of this, I will now have to ban all future contributions from your University.
Understandable from gkh, but I feel sorry for any unrelated research happening at University of Minnesota.
EDIT: Searching through the source code[1] reveals contributions to the kernel from umn.edu emails in the form of an AppleTalk driver and support for the kernel on PowerPC architectures.
In the commit traffic[2], I think all patches have come from people currently being advised by Kangjie Lu[3] or Lu himself, dating back to Dec 2018. In 2018, Wenwen Wang was submitting patches; during this time he was a postdoc at UMN and co-authored a paper with Lu[4].
Prior to 2018, commits involving UMN folks appeared in 2014, 2013, and 2008. None of these people appear to be associated with Lu in any significant way.
[1]: https://github.com/torvalds/linux/search?q=%22umn.edu%22
[2]: https://github.com/torvalds/linux/search?q=%22umn.edu%22&typ...
[3]: https://www-users.cs.umn.edu/~kjlu/
[4]: http://cobweb.cs.uga.edu/~wenwen/
New plan: Show up at Liu's house with a lock picking kit while he's away at work, pick the front door and open it, but don't enter. Send him a photo, "hey, just testing, bro! Legitimate security research!"
That's the university's problem to fix.
Unless the professors and leadership from the IRB are having an uncomfortable lecture in the chancellor's office, nothing at all changes.
There are a lot of people to feel bad for, but none is at the University of Minnesota. Think of the Austrians.
It's not wrong for the kernel community to decide to blanket ban contributions from the university. It obviously makes sense to ban contributions from institutions which are known to send intentionally buggy commits disguised as fixes. That doesn't mean you can't feel bad for the innocent students and professors.
This analogy is invalid, because:
1. The experiment is not on live, deployed, versions of the kernel.
2. There are mechanisms in place for preventing actual merging of the faulty patches.
3. Even if a patch is merged by mistake, it can be easily backed out or replaced with another patch, and the updates pushed anywhere relevant.
All of the above is not true for the in-flight airliner.
However - I'm not claiming the experiment was not ethically faulty. Certainly, the U Minnesota IRB needs to issue a report and an explanation on its involvement in this matter.
The main problem is that they have (so far) refused to explain in detail how and where the patches were reviewed. I have not gotten any links to any lkml post, even after Kangjie Lu personally emailed me to address any concerns.
I like that. That's what makes universities interesting to me.
I don't like the standard here of penalizing or lumping everyone there together, regardless of whether they contributed in the past, contribute now, will in the future, or not.
For example: what happens when the students graduate- does the ban follow them to any potential employers? Or if the professor leaves for another university to continue this research?
Does the ban stay with UMN, even after everyone involved left? Or does it follow the researcher(s) to a new university, even if the new employer had no responsibility for them?
If the university has a problem, then they should first look into managing this issue at their end, or force people to use personal email IDs for such purposes.
If you don't manage to reach that goal, too bad, but you can contribute on a personal capacity, and/or go work elsewhere.
It sounds like you're making impossible demands of unrelated people, while doing nothing to solve the actual problem because the perpetrators now know to just create throwaway emails when submitting patches.
https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
(All of this ASSUMING that the intent was as described in the thread.)
They are conducting research to demonstrate that it is easy to introduce bugs in open source...
(whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)
[removed this ranting that does not apply since they are contributing a lot to the kernel in good ways too]
> They are conducting research to demonstrate that it is easy to introduce bugs in open source...
That's a very dangerous thought pattern. "They try to find flaws in a thing I find precious, therefore they must hate that thing." No, they may just as well be trying to identify flaws to make them visible and therefore easier to fix. Sunlight being the best disinfectant, and all that.
(Conversely, people trying to destroy open source would not publicly identify themselves as researchers and reveal what they're doing.)
> whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards
How do we know that? We know things by regularly testing them. That's literally what this research is - checking how likely it is that intentional vulnerabilities are caught during review process.
This is a ridiculous conclusion. I do agree with the kernel maintainers here, but there is no way to conclude that the researchers in question "hate open source", and certainly not that such an attitude is shared by the university at large.
That's not true at all. There are many internet-critical projects with tons of holes that are not found for decades, because nobody except the core team ever looks at the code. You have to actually write tests, do fuzzing, static/memory analysis, etc to find bugs/security holes. Most open source projects don't even have tests.
Assuming people are always looking for bugs in FOSS projects is like assuming people are always looking for code violations in skyscrapers, just because a lot of people walk around them.
Which is why there have never been multi-year critical security vulnerabilities in FOSS software.... right?
Sarcasm aside, because of how FOSS software is packaged on Linux we've seen critical security bugs introduced by package maintainers into software that didn't have them!
- Aditya Pakki (the author who sent the new round of seemingly bogus patches) is not involved in the S&P 2021 research. This means Aditya is likely to have nothing to do with the prior round of patching attempts that led to the S&P 2021 paper.
- According to the authors' clarification [1], the S&P 2021 paper did not introduce any bugs into Linux kernel. The three attempts did not even become Git commits.
Greg has all reasons to be unhappy since they were unknowingly experimented on and used as lab rats. However, the round of patches that triggered his anger *are very likely* to have nothing to do with the three intentionally incorrect patch attempts leading to the paper. Many people on HN do not seem to know this.
[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
[1] https://adityapakki.github.io/assets/files/aditya_cv.pdf
https://adityapakki.github.io/
In this "About" page:
https://adityapakki.github.io/about/
he claims "Hi there! My name is Aditya and I’m a second year Ph.D student in Computer Science & Engineering at the University of Minnesota. My research interests are in the areas of computer security, operating systems, and machine learning. I’m fortunate to be advised by Prof. Kangjie Lu."
so he in no uncertain terms is claiming that he is being advised in his research by Kangjie Lu. So it's incorrect to say his patches have nothing to do with the paper.
This being the internet, I'm sure the guy is getting plenty of hate mail as it is. No need to make it worse.
Professors usually work on multiple projects, which involve different grad students, at the same time. Aditya Pakki could be working on a different project with Kangjie Lu, and not be involved with the problematic paper.
I used to work as an auditor. We were expected to conduct our audits to neither expect nor not expect instances of impropriety to exist. However, once we had grounds to suspect malfeasance, we were "on alert", and conducted our tests accordingly.
This is a good principle that could be applied here. We could bat backwards and forwards about whether the other submissions were bogus, but the presumption must now be one of guilt rather than innocence.
Personally, I would have been furious and said, in no uncertain terms, that the university should keep a low profile and STFU, lest I be provoked into taking actions that end with someone's balls being handed to them on a plate.
I'm no lawyer, but it seems like there'd be something actionable.
On a side note, this brings into question any research written by any of the participating authors, ever. No more presumption of good faith.
Except that at least one of those three did [0]. The author is incorrect that none of their attempts became git commits. Whatever process they used to "check different versions of Linux and further confirmed that none of the incorrect patches was adopted" was insufficient.
[0] https://lore.kernel.org/patchwork/patch/1062098/
That doesn't appear to be one of the three patches from the "hypocrite commits" paper, which were reportedly submitted from pseudonymous gmail addresses. There are hundreds of other patches from UMN, many from Pakki[0], and some of those did contain bugs or were invalid[1], but there's currently no hard evidence that Pakki was deliberately making bad-faith commits, just the association of his advisor being one of the authors of the "hypocrite" paper.
[0] https://github.com/torvalds/linux/commits?author=pakki001@um...
[1] Including his most recent that was successfully applied: https://lore.kernel.org/lkml/YH4Aa1zFAWkITsNK@zeniv-ca.linux...
Unfair? Maybe: complain to your advisor.
It's like walking into a government building with a bomb threat and then claiming it was only an experiment to find security loopholes.
https://adityapakki.github.io/experience/
In short "f** around, find out"
"We sought to probe vulnerabilities of the open-source public-development process, and our results include a methodology for getting an entire university's email domain banned from contributing."
What this professor is proving out is that open source and (likely, other) high trust networks cannot survive really mendacious participants, but perhaps by mistake, he's showing how important it is to make very harsh and public examples of said actors and their mendacity.
I wonder if some of these or other bug contributors have also complained that the culture of the project governance is too aggressive, that project leads can create an unsafe environment, and discourage people from contributing? If counter-intelligence prosecutors pull on this thread, I have no doubt it will lead to unravelling a much broader effort.
[1] https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...
This is overkill and uncalled for.
The foundation should have recourse to the law to signal they are handling it, if only to prevent these profs from being mobbed.
They claim that none of the bogus patches were merged into the stable tree:
> Once any maintainer of the community responds to the email, indicating "looks good", we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.
I haven't been able to find out which three patches they are referring to, but the discussion on Greg's UMN revert patch [2] does indicate that some of the fixes have indeed been merged to stable and are actually bogus.
[1] : https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
[2] : https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
From the post:
"Yes, this wastes maintainers time, but we decided we didn't care."

As someone not part of academia, how could this research be judged to not involve people? It _seems_ obvious to me that the entire premise is based around tricking/deceiving the kernel maintainers.
No one says "wasted their precious time" in a sincere apology. The word 'precious' here is exclusively used for sarcasm in the context of an apology, as it does not represent a specific technical term such as might appear in a gemology apology.
There IS a better solution: not to proceed with that "study" at all.
That is the perfect example of being arrogant
Couldn't figure out that "not doing it" was an option apparently.
I'm going to go with "both" here.
Not sure how the researchers didn't see how this would backfire, but it's a hopeless misuse of their time. I feel really bad for the developers who now have to spend their time fixing shit that shouldn't even be there, just because someone wanted to write a paper and their peers didn't see any problems either. How broken is academia really?
The researchers should have approached the maintainers to get buy-in, and set up a methodology where a maintainer would not interfere until a code merge was imminent, just playing referee in the meantime.
That's because those are two separate incidents. The study which resulted in 3 patches was completed some time last year, but this new round of patches is something else.
It's not clear whether the patches are coming from the same professor/group, but it seems like the author of these bogus patches is a PhD student working with the professor who conducted that study last year. So there is at least one connection.
EDIT: also, those 3 patches were supposedly submitted using a fake email address according to the "clarification" document released after the paper was published. So they probably didn't use a @umn.edu email at all.
Now a different researcher from UMN, Aditya Pakki, has submitted a patch which contains bugs and seems to be attempting the same type of pen testing, although the PhD student denied it.
1. Section IV.A of the paper, as pointed out by user MzxgckZtNqX5i in this comment: https://news.ycombinator.com/item?id=26890872
> Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.
2. Clarifications on the “hypocrite commit” work (FAQ)
https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
"* Does this project waste certain efforts of maintainers? Unfortunately, yes. We would like to sincerely apologize to the maintainers involved in the corresponding patch review process; this work indeed wasted their precious time. We had carefully considered this issue, but could not figure out a better solution in this study. However, to minimize the wasted time, (1) we made the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we tried hard to find three real bugs, and the patches ultimately contributed to fixing them."
With more than 60% of academic publications in some fields failing to replicate [1], one would think academia has better things to do than wasting other people's time.
[1] https://en.wikipedia.org/wiki/Replication_crisis
[0] https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...
https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
See section VI-A (page 8).
>... learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel
And this sounds like mainly a lot of damage control is going to happen.
>We will report our findings back to the community as soon as practical.
Can someone who's more invested into kernel devel find them and analyze their impact? That sounds pretty interesting to me.
Edit: This is the patch reverting all commits from that mail domain: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
Edit 2: Now that the first responses to the reversion are trickling in, some merged patches were indeed discovered to be malicious, like the following. Most of them seem to be fine though, or at least non-malicious. https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...