> the world owes Andres unlimited free beer. He just saved everybody’s arse in his spare time. I am not joking.
Agreed.
> and face significant amounts of online abuse from users of the software.
Much as I’d like this to change, I suspect it will not. I’ve been doing open-source work since the mid-90s, and have suffered abuse that would curl your toes, but I still do it, for reasons that would be obscure to many folks hereabouts.
I think the onus is on the users of the software. It’s currently way too easy to make “LEGO block” software, by splicing together components, written by others, that we do not understand (which, in itself, is not necessarily bad. We’ve been working this way for centuries). I remember reading a report that listed YAML as one of the top programming languages.
If companies insist on always taking the easy way out, they continue to introduce a significant amount of risk. Since it’s currently possible to make huge buckets of money, by writing crap, we’re not incentivizing high-Quality engineering.
I don’t think there’s any easy answer, as a lot of the most effective solutions are cultural, as opposed to technical. Yesterday, someone posted a part of an old speech, by Hyman Rickover[0], that speaks to this.
[0] https://news.ycombinator.com/item?id=39889072
If companies insist on always taking the easy way out, they continue to introduce a significant amount of risk.
This: each company needs to take some amount of responsibility for the stack that they use. A company that I worked for sponsored upstream maintainers, sometimes for implementing specific functionality, sometimes without any specific goal. If more companies did this for their open source stacks, open source development would be much better funded.
Of course, it will always be hard to detect very sophisticated attacks from actors that can play the long game. But if maintenance is not just a hobby, PRs can receive more attention, and if it's not an added workload besides a day job, there is less churn/burn-out, which malicious actors can exploit to take over a project.
"...each company needs to take some amount of responsibility for the stack that they use. A company that I worked for, sponsored upstream maintainers, sometimes for implementing specific functionality, sometimes without any specific goal. If more companies did this for their open source stacks, open source development would be much better funded."
Agreed, 100%. And yet: In a comment on Slashdot one guy said that his company had "thousands" of machines running Linux, and he was "proud" to have never paid a penny for it. I called him a parasite: open-source brings an immense value to his company, and he should support it.
My comment got a lot of hate, which I just don't understand. Sure, OSS licenses say you can do whatever you want. However, there is surely some ethical obligation to support something that your entire business depends on?
> This: each company needs to take some amount of responsibility for the stack that they use. A company that I worked for sponsored upstream maintainers, sometimes for implementing specific functionality, sometimes without any specific goal. If more companies did this for their open source stacks, open source development would be much better funded.
This is good, but I don’t think it’s enough to cover stuff like this, which is deep in the stack and probably doesn’t have enough demand to warrant the trouble of direct funding (think of how many organizations make it easier to buy a $50k support contract than a $50 open source donation). I’ve been wondering if you could get companies to pay into something like the Linux Foundation or Apache for a general maintainer group tasked with taking something like Debian’s popcon and going down the list of less-established projects to pick up maintenance work.
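To make the popcon idea concrete, here is a minimal sketch of the triage step. Note this is only an illustration: the three-column format (rank, package, installations) is a simplification of Debian's real by_inst report, and the sample data and threshold are invented.

```python
# Sketch of the triage idea: walk a popcon-style installation ranking
# and surface widely-installed packages for maintenance review.
SAMPLE_REPORT = """\
1 dpkg 201000
2 tar 200500
3 xz-utils 199800
4 left-pad-ng 1200
"""

def triage_candidates(report: str, min_installs: int = 100_000):
    """Return (rank, package, installs) for widely-installed packages."""
    candidates = []
    for line in report.splitlines():
        rank, name, installs = line.split()
        if int(installs) >= min_installs:
            candidates.append((int(rank), name, int(installs)))
    return candidates
```

A real version would filter the resulting list against known well-funded projects and hand the remainder to the maintainer group.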
One of the problems with the "money solution" in this case is that xz is a very small, relatively stable piece of software. Sure, things like the linux kernel, firefox, gnome or openssh could use huge donations to fund multiple developer day jobs for years.
But xz is small; it doesn't need to add a lot of new features constantly. It does need maintenance, but probably only a couple of hours each month – surely not enough to warrant a full-time day job. So what does the dev do to spend the other 90% of his time (and earn the other 90% of the money)? Some people don't like juggling multiple jobs (very stressful), some corporate jobs don't allow it, plus you've done nothing to reduce the bus-factor risk (ideally any vital library should have 2, 3, … people working on it, but who can carve out just 5% of a day job to devote to open source maintenance?)
I think the 'easy' answer is liability, same as it is for any other complex human engineering achievement. Liability, though, would mean at the very least allowing commit access only to identified individuals and companies that are willing to pay for insurance coverage (to gain commit access).
This would probably ruffle too many feathers from the GNU old-timers, but I really don't see any other option. We are way past the tinkering-in-the-basement days of Linux/BSD hackers when most of us just wanted a cheap Unix box to play around with or to avoid Windows. A massive percentage of the civilian (and other) infrastructure is built on the shoulders of unpaid hobbyists. There is already massive liability at the social and corporate level. Time to deal with it.
EDIT: Ok, sounds like I have to describe this better: 1) you (governments) force commercial providers to assume liability for security issues and exploits and force disclosure, etc., 2) their insurance premiums go up, 3) to reduce premiums they only use checked/secured software, 4) that means maintainers of at least the critical pieces of software get paid via the (new) channel of risk reduction. Doesn't apply to all OSS, doesn't even apply to all distros. But it creates an audit trail and potentially actual compensation for maintainers.
Sounds like a good way to kill off open source entirely. This is luckily unlikely to happen.
As for throwing money at the maintainers, honestly, it’s complicated. A lot of people aren’t doing open source work for the money. Money too often comes with strings: requirements to prioritize what the funder wants to prioritize, pressure to perform consistently; it becomes an actual job.
Not only does this turn off a lot of the types of folks who make the best contributions to these projects, but it bends the priorities toward what would make the most money for the funder. And as this article points out, real security investments often fall by the wayside when profit is involved.
So yes, companies should encourage their workers to contribute to these projects, donate money to the foundations that fund them, hire important maintainers and give them carte blanche to work on open source. But we have to be careful. Making it all completely transactional is directly contradictory to what drives a lot of the contributions.
I wouldn't mind receiving insurance coverage (and the background check required to support it) IF YOU PAID ME TO DO SO!
But we (mostly) don't even pay open source developers to write the code ... who is offering to pay them for this insurance?
Besides, this was a highly sophisticated actor. Someone willing to create several layers of loaders and invest large amounts of time into getting xz excluded from certain checks. Anyone with such sophisticated spycraft could have fooled the insurance companies also.
Expecting liability coverage for source code people publish for free on their own time has very strong implications on free speech, freedom of arts, and freedom of science. I don’t think this is possible in a liberal society.
On the other hand, you can already buy software where the vendor takes some kind of liability: just buy Windows, AIX, or one of the commercial Linux offerings. Same for software libraries: there are commercial offerings that come with (limited) liability from the vendor. It’s even possible to create software to stronger standards. But outside of some very specific industries (aerospace, automotive, nuclear, defense, …) there doesn’t seem to be a market for that.
This is an easy way that will achieve the goal of completely killing free software, destroying the entire software industry in the process.
I contribute stuff for fun, for free. Now I also have to PAY to do that??? Plus anyone can just steal my identity… I have to show my ID every time I sleep at a hotel. Hundreds of people have a copy of my ID and could use it to open an account in my name online…
Do you guys ever read what you write? Did you stop to think about it for more than 0.3 seconds?
> If companies insist on always taking the easy way out, they continue to introduce a significant amount of risk. Since it’s currently possible to make huge buckets of money, by writing crap, we’re not incentivizing high-Quality engineering.
For the past 10-15 years there has been a strong culture of never writing code when a library on GitHub or NPM already exists, which has, in large part, contributed to this. In many cases using existing battle-tested code is the right thing, but it was taken to an extreme where declining to pull down a bunch of random open source packages of questionable stability and longevity was often maligned as not-invented-here syndrome.
Now many of those packages are unmaintained, the job hopping SWEs who added them to the codebase are long gone, and the chickens are coming home to roost. The pendulum will probably swing too far in the other direction, but thoughtful companies will find the right balance.
“…comes as-is, without warranties and without any commitment for future work. Complaints will get your feature request deprioritised, may get you banned, and will look silly to any potential employer googling your name”.
Also, let’s make it a meme to call out unreasonable behaviour: stop Jigar-Kumaring!
Perhaps the situation would improve if it were easier/more normalised to offer to pay the core developer to fix the bug that affects you. If that were the case, it would boil down to put up or shut up.
This wouldn't be entirely without downside though, as there could be a risk that the project ends up getting steered by whoever has the most money, which may be at odds with what the broader community gets from the project. That's difficult to avoid whenever Open Source developers get paid, unfortunately. If it were limited to bug-fixes I think the risk would be slim. I'm not sure if any projects have tried this.
I think the abuse is unfixable. Being exposed to many people just exposes you to many strange people. It is like how celebrities get paranoid and try to not be seen, since they are magnets for strange people that recognize them.
Being pseudonymous filters out a lot of credible threats, though.
I'm honestly really worried about Andres. He thwarted a very expensive operation by a state actor.
Also, this backdoor was discovered by sheer chance. How many of those backdoors are still to be discovered? We should be checking all source code present in a standard Linux server right now, but all I can see is complacency because this single attempt was caught.
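One concrete check that doesn't require reading every line of code: the xz payload lived in the release tarball but not in the git tree, so diffing an unpacked tarball against the tagged sources flags that class of smuggling. A rough sketch, with invented directory names; note that `filecmp.dircmp`'s default comparison is stat-based, so a serious audit should hash file contents instead.

```python
import filecmp
import os

def smuggled_files(git_tree: str, tarball_tree: str):
    """Report files that exist only in the release tarball, or that
    differ from the git tag -- the kind of delta that hid xz's
    malicious build-to-host.m4. Legitimately generated files
    (configure, Makefile.in, ...) will show up too and need triage."""
    extra, changed = [], []

    def walk(cmp: filecmp.dircmp, prefix: str = "") -> None:
        extra.extend(os.path.join(prefix, n) for n in cmp.right_only)
        changed.extend(os.path.join(prefix, n) for n in cmp.diff_files)
        for name, sub in cmp.subdirs.items():
            walk(sub, os.path.join(prefix, name))

    walk(filecmp.dircmp(git_tree, tarball_tree))
    return sorted(extra), sorted(changed)
```

Run it against an unpacked release and a `git archive` of the matching tag; anything in the "extra" list deserves a hard look.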
> thwarted a very expensive operation by a state actor
From the article:
…the fix for this was already in train before the XZ issue was highlighted, and long before the Github issue. The fix stopped the XZ backdoor into SSH, but hadn’t yet rolled out into a release of systemd.
I believe there’s a good chance the threat actor realised this and rapidly accelerated development and deployment, hence publicly filing bug reports to try to get Ubuntu and such to upgrade XZ, as it was about to spoil several years of work. It also appears this is when they started making mistakes.
> How many of those backdoors are still to be discovered?
Since keeping such a backdoor hidden in plain sight is extremely hard and requires tons of preparation and social engineering spanning multiple projects, the answer is probably a function of the number already discovered. As we don't discover years-old similar backdoors every now and then, and we discovered this one pretty quickly, this might very well be the very first one that came this far.
Also, what's "sheer chance" for an individual is "enough eyeballs" for the collective.
I think the fact that it happened pretty much by chance means he's not more of a threat to any state actor now than before. It's not like he's suddenly the anti-chinese-backdoor guy because of this. Or maybe he is, but more in a funny infosec hall of fame kinda way. It won't be him saving us next time.
> I'm honestly really worried about Andres. He thwarted a very expensive operation by a state actor.
I don't think Andres is in serious danger, unless he is a persistent threat to the bad actors. It's true that we owe him big time for discovering the backdoor. But it could have been someone else before him. And it may be someone else the next time. Too much depends on chance for anyone to justify targeting him. They risk blowing their cover by doing that just to satisfy their ego.
Code quality has been plunging for years while we've become more dependent upon code.
Devin AI working Upwork jobs blew minds, but it succeeded for a reason. Upwork and similar sites are plagued by low quality contractors who do little more than glue together code written by better engineers. It was never a hard formula to copy.
Outsourcing programming work to the lowest-cost, lowest-quality third-party libraries is leading to inevitable results.
Obviously, the next leap will be sophisticated supply chain attacks based upon poisoning AI.
I think it’s even more fundamental than culture: there are so many moving parts nowadays that the vast majority just aren’t smart enough to write high quality software within a normal 40-hour-a-week job.
e.g. The difficulty curve for writing apps went down a bit when iOS became popular, but nowadays iOS 17 is probably more complex than Snow Leopard was in 2009, so there isn’t any low-hanging fruit left for the median SWE.
> Let’s just keep doing the good SBOM work at CISA, and stop doing stunts around Huawei and such — Huawei is a speck of dust compared to the issues around tens of thousands of unpaid developers writing the core of the world’s most critical infrastructure nowadays.
I have to disagree with this.
There seems to be this weird mindset in tech that because there's problem X (the Five Eyes countries hacking each other's citizens at each other's request, Meta collecting data on users, a massive attack using xz that almost got into the wild, etc.), China isn't a problem. It's this strange "our house isn't in order because of our own doing, so it doesn't matter if some dude off the street starts squatting in it" idea.
If you have a country that has a tech legacy mainly related to espionage and attacks on other countries' systems - and make no mistake, that's China's main legacy - don't buy their stuff, no matter how many times it's said that it's fine. At some point it won't be fine.
You can fix that and better secure FOSS projects; it's not one or the other.
> Secondly, a core issue here is systemd in Linux. libsystemd is linked to all systemd services, which opens a rich attack surface of third party services to backdoor.
I guarantee you more services link to glibc, and that glibc has a much, much larger attack surface than anything systemd.
Also it's not like this attack would have been impossible if systemd hadn't done that. As soon as anything loaded xz it could pretty much do what it wanted. Slightly more of a pain but not difficult.
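To make the "loading is enough" point concrete: in an ELF world the mechanism is constructors and IFUNC resolvers, but the same property is easy to demonstrate with a Python import, where top-level module code runs the moment the module is loaded, whether or not any of its features are ever called. A toy illustration; the module name and "payload" are invented.

```python
import importlib
import os
import sys
import tempfile

# Top-level module code runs the moment the module is loaded -- the
# Python analogue of an ELF constructor or IFUNC resolver firing when a
# shared object is merely mapped into a process.
PAYLOAD = (
    "import builtins\n"
    "builtins.LOADED_PAYLOAD = 'ran at load time, before any API call'\n"
)

def load_innocent_looking_dep() -> None:
    tmpdir = tempfile.mkdtemp()
    with open(os.path.join(tmpdir, "innocent_dep.py"), "w") as f:
        f.write(PAYLOAD)
    sys.path.insert(0, tmpdir)
    importlib.import_module("innocent_dep")  # "just" loading it

load_innocent_looking_dep()
```

No function of `innocent_dep` was ever called, yet its code has already run with the host process's privileges.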
> libsystemd is linked to all systemd services, which opens a rich attack surface of third party services to backdoor.
Many people have warned that systemd does way too much for an init system, and presents way too much attack surface. Cluck, cluck (chickens coming home to roost).
Firstly, it's not true: most systemd services don't link to libsystemd. It's useful for supporting the sd_notify feature, but plenty of things support it without using libsystemd.
Rich attack surface: there were like... four dependencies that aren't glibc, and irrespective of this backdoor attempt they were almost all already being made dlopen dependencies, which would've prevented this backdoor from working, since it would've never been triggered for sshd. (Note, also, that linking libsystemd into sshd was a downstream patch.)
The truth is that libsystemd and systemd in general don't have ridiculous dependency trees or attack surfaces. Most likely the big reason this backdoor was being pushed so heavily to make it into Debian and Fedora releases was that the clock was ticking on the very small “in” they had found in the first place.
There are a lot of criticisms of systemd that I think are reasonable, but this really isn't one.
> (Note, also, that linking libsystemd into sshd was a downstream patch.)
I feel like a lot of people are really glossing over this point. What's happened here is that Red Hat and Debian have made a choice to patch OpenSSH in a way that opened it up to this attack.
It's a little ironic that e.g. Arch, which actually shipped the malicious code to end users since it publishes changes so much faster, never would have executed the payload as shipped (because they didn't patch OpenSSH).
So on the one hand libsystemd provides what both you and the manual page describe as "useful" functionality for implementing this feature, but on the other you're implying that Debian shouldn't have used it to do what it's designed for?
Maybe libsystemd shouldn't provide sd_notify then, if you're not supposed to use it?
While I agree generally about systemd, this line in particular is not even correct. You don’t have to link libsystemd to run as a systemd service. Using the notify feature is optional anyway, but you can do it without linking to that library.
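For the curious, the wire protocol behind sd_notify is just a single datagram of newline-separated KEY=VALUE assignments sent to the AF_UNIX socket named by $NOTIFY_SOCKET, so a sketch like this covers the readiness case without linking libsystemd (Python here for brevity; a C daemon would do the same in a few more lines):

```python
import os
import socket

def notify(state: str = "READY=1") -> bool:
    """Send a readiness datagram to systemd's notify socket.

    This is essentially the whole sd_notify wire protocol: one datagram
    of newline-separated KEY=VALUE assignments, sent to the AF_UNIX
    socket named by $NOTIFY_SOCKET. No libsystemd linkage required.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not started with Type=notify; silently do nothing
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # '@' marks a Linux abstract-namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode("ascii"), addr)
    return True
```

A service calls `notify()` once it is actually ready to serve; the same function with `"STOPPING=1"` or `"WATCHDOG=1"` covers the other common cases.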
Would an OpenBSD-style minimum functionality Unix philosophy designed init system stop a developer from taking over as maintainer of a different upstream project, allowing them to submit malicious patches?
The dependency is attributable, in the largest part, to systemd's neoplastic aggrandizement of userland infrastructure and associated plumbing, making this a distinction without much of a difference.
It's acceptable for avahi and pulseaudio because you don't need to use them.
With systemd, it's a big problem because systemd is widely distributed. Systemd failed on its promise to create a new standard for defining services. Right now a ton of projects ship their own supervisor (runit, supervisord) or docker compose file to reduce their contact surface with systemd. Look what GitLab Omnibus does.
But with everything connected to systemd (udev, dbus), anything without systemd is sort of second class in terms of being tested. Ideally I would have the stability of Debian without the surprises from systemd. I tell people to "Press Ctrl+Alt+Del 7 times within two seconds" way too often.
This backdoor would've been caught eventually because the added latency is substantial. It wouldn't have occurred on Arch Linux, BSD, macOS, any Solaris derivative OS (Illumos etc).
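For reference, the anomaly that gave it away was of the "benchmark it and stare" kind: Freund reportedly saw ssh logins taking about half a second longer than expected. A toy harness for that class of regression check; the callables and the threshold factor are purely illustrative.

```python
import statistics
import time

def benchmark(fn, runs: int = 20) -> float:
    """Median wall-clock seconds for one call of fn."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def regressed(baseline: float, current: float, factor: float = 3.0) -> bool:
    """Flag when the current timing is several times the baseline.
    The xz backdoor reportedly added roughly half a second to each
    sshd login -- far beyond normal jitter for a handshake."""
    return current > factor * baseline

fast = benchmark(lambda: None)               # stand-in for the old binary
slow = benchmark(lambda: time.sleep(0.005))  # stand-in for the backdoored one
```

Running something like this in CI against each new package build is cheap and catches exactly the kind of latency the backdoor introduced.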
We should be grateful it got caught so quickly. I sent Andres an honest thank-you email. It isn't financial (and I am just one individual) but it felt like the least I could do.
If there is a way to donate, I would. This person could've earned more via HackerOne or the black market, and instead went with an arguably better path. I don't think we can compete with the latter, though, unless we start treating this for what it is: a weapon.
I am grateful too, but I don’t think that this person could have earned more via black markets. This backdoor in ssh was NOBUS ("nobody but us"): you cannot exploit it unless you have the private key.
Like, what would be different if the software were closed source and the developers paid by companies? I think it would be at least as hard to notice such an exploit, and sometimes it might be easier (if the company is located in your jurisdiction).
Maybe the current mindset of assembling programs could be improved. There is a trend in some architectures to separate everything into its own container, and while I don't think it can be directly applied everywhere, that model gives more separation for cases like this. Engineering is an art of trade-offs, and maybe now we can afford to make different trade-offs than 30 years ago (when some things were decided).
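As a sketch of that trade-off: instead of linking a codec into a privileged process, pipe the untrusted bytes through a short-lived worker process, so a backdoored codec never shares the parent's address space. Python stdlib only; the worker code is illustrative, and a real deployment would further confine it with seccomp, namespaces, or a container.

```python
import lzma
import subprocess
import sys

# Decompress untrusted data in a throwaway child process instead of
# in-process. A compromised codec then runs in a disposable worker,
# not inside the privileged parent.
WORKER = (
    "import lzma, sys\n"
    "sys.stdout.buffer.write(lzma.decompress(sys.stdin.buffer.read()))\n"
)

def decompress_isolated(blob: bytes, timeout: float = 10.0) -> bytes:
    result = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=blob,
        capture_output=True,
        timeout=timeout,
        check=True,
    )
    return result.stdout

compressed = lzma.compress(b"hello from the other side of the boundary")
```

The cost is a fork/exec per operation; the benefit is that a compromise of the codec is contained to a process holding no secrets.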
DJB’s qmail, written in 1995, was made of 5 distinct processes, owned by different users, with no trust among them. Coincidentally, it was the MTA with the best security record for more than a decade (and also the most efficient one).
It would have likely had a similar record even if it was only a monolithic process - because DJB - but it was built as 5 processes so even if one falls, the others do not.
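The "no trust among them" part is as important as the process boundary: in a qmail-style design each stage re-validates what the previous stage hands it against a narrow grammar before acting on it. A minimal illustration; the pattern is deliberately far stricter than what RFC 5321 actually allows for addresses.

```python
import re

# qmail-style mutual distrust in miniature: even output from our own
# previous pipeline stage is re-validated against a narrow grammar
# before use, so one compromised stage can't smuggle arbitrary strings
# downstream.
RECIPIENT = re.compile(r"[a-z0-9._-]{1,64}@[a-z0-9.-]{1,255}")

def accept_from_peer(line: str) -> str:
    """Accept a recipient handed over by the previous stage, or refuse."""
    if not RECIPIENT.fullmatch(line):
        raise ValueError("peer output violates the inter-stage contract")
    return line
```

The point is that the contract is enforced at every hand-off, not just at the outer edge of the system.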
The problem is most developers and companies simply don't care, or are even hostile to improvements ("this is not the Unix way"). We had SELinux for over two decades. We can do even more powerful isolation than qmail could at the time, yet nobody outside Red Hat and Google (Android/ChromeOS) seems to be interested. Virtually all Linux distributions largely rely on a security model of the '70s and a packaging model of the '90s. This is compounded by one of the major distributions providing only 'community-supported security updates' for their largest package set (which most users don't seem to know), which unfortunately means that a lot of CVEs are not fixed. A weak security model plus outdated packages makes our infrastructure very vulnerable to nation state-funded attacks. The problem is much bigger than this compromise of xz. Hostile states probably have tens if not hundreds of backdoors and vulnerabilities that they can use in special cases (war, etc.).
It's endemic not just to open source. macOS has supported app sandboxing since Snow Leopard (2009), yet virtually no application outside the App Store (where sandboxing is mandatory) sandboxes itself. App sandboxing could stop both backdoors from supply chain compromises and vulnerabilities in applications in their tracks. Yet, developers put their users at risk and we as users cheer when a developer moves their application out of the App Store.
It's time for not only better funding, but significantly better security than the '70s/'80s Unix/Windows models.
it was also essentially unusable without a crapload of third party patches that DJB would not include into the master release, but yes it was quite secure :-)
I don't think this is necessarily true. People do a lot of reverse engineering of proprietary OSes and a lot of vulnerabilities are found that way (besides fuzzing). And the tooling for reverse engineering is only getting better.
Also, let's not forget that this particular backdoor was initially found through behavioral analysis, not by reading the source code. I think Linus' law "given enough eyeballs, all bugs are shallow" has been refuted. Discovering bugs does not scale linearly with eyeballs, you need the right eyeballs. And the right eyeballs are often costly.
If your implicit premise is that having the source code available makes code easier to analyze than closed source, you can also flip the argument around: it is easier for bad actors to find exploitable vulnerabilities because the source code is available.
(Note: I am a staunch supporter of FLOSS, but for different reasons, such as empowerment and freedom.)
Google's Fuchsia OS looks promising, but it doesn't look like it (or something like it) will go anywhere until the world accepts that you probably have to build security into the fabric of everything. You can't just patch it on.
I think Google has given up on Fuchsia, outside some specific domains, right?
I think currently even Android, iOS, and ChromeOS have far better security than most desktop and server OSes. I think of the widely-used general purpose OSes, only macOS comes fairly close because it adopted a lot of things from iOS (fully verified boot, sealed system partition, app sandboxing for App Store apps, protected document/download/mail/... directories, etc.).
There isn’t much closed source software which is depended on as heavily as things like xz. The only one I can think of is Windows, which I think it’s safe to assume is definitely backdoored.
However, the incentive even if a company detected the infiltration is to keep quiet about it. Let's say that a closed source "accessd" was backdoored. A database admin notices the new version (accessd 2024.6 SAAS version model 2.0+ with GPT!) is slower than the previous version; they put it down to enshittification. Or they contact the company, which has no incentive to spend money to look into it. There's no way the database admin (or whoever) can look into the git commit chain and see where the problem happened.
Gonna have a serious talk with my mother about not trying to hack into opensource software that powers most of the world's software. I know retirement is boring, but that's no excuse.
Some good observations here, including about the apparent acceleration of effort and ensuing sloppiness due to impending changes to systemd that would have prevented the particular attack vector.
Unfortunately, the sarcastic tone starts to become a barrier to separating signal from noise about halfway through. Okay, you’re super clever, the NSA is a threat, too, we get it. Security vendors are largely hucksters, fine. What were we talking about again?
I contribute stuff for fun, for free. Now I also have to PAY to do that??? Plus anyone can just steal my identity… I have to show my ID every time I sleep at a hotel. Hundreds of people have a copy of my id and could use it to open an account in my name online…
Do you guys ever read what you write? Did you stop to think about it for more than 0.3 seconds?
For the past 10-15 years there has been a strong culture of never writing code when a library on GitHub or NPM already exists, which has, in large part, contributed to this. In many cases using existing battle-tested code is the right thing, but it was taken to an extreme where declining to pull down a bunch of random open source packages with questionable stability and longevity was often maligned as not-invented-here syndrome.
Now many of those packages are unmaintained, the job hopping SWEs who added them to the codebase are long gone, and the chickens are coming home to roost. The pendulum will probably swing too far in the other direction, but thoughtful companies will find the right balance.
“…comes as-is, without warranties and without any commitment for future work. Complaints will get your feature request deprioritised, may get you banned, and will look silly to any potential employer googling your name”.
Also, let’s make it a meme to call out unreasonable behaviour: stop Jigar-Kumaring!
This wouldn't be entirely without downside though, as there could be a risk that the project ends up getting steered by whoever has the most money, which may be at odds with what the broader community gets from the project. That's difficult to avoid whenever Open Source developers get paid, unfortunately. If it were limited to bug-fixes I think the risk would be slim. I'm not sure if any projects have tried this.
Being pseudonymous filters out a lot of credible threats, though.
Also, this backdoor was discovered by sheer chance. How many of those backdoors are still to be discovered? We should be checking all source code present in a standard Linux server right now, but all I can see is complacency because this single attempt was caught.
From the article:
Since keeping such a backdoor hidden in plain sight is extremely hard and required tons of preparation and social engineering spanning multiple projects, the answer is probably a function of the number already discovered. As we don't periodically unearth similar years-old backdoors, and this one was caught pretty quickly, it may very well be the first that came this far.
Also, what's "sheer chance" for an individual is "enough eyeballs" for a collectivity.
I don't think Andres is in serious danger, unless he is a persistent threat to the bad actors. It's true that we owe him big time for discovering the backdoor. But it could have been someone else before him. And it may be someone else the next time. Too much depends on chance for anyone to justify targeting him. They risk blowing their cover by doing that just to satisfy their ego.
Devin AI working Upwork jobs blew minds, but it succeeded for a reason. Upwork and similar sites are plagued by low quality contractors who do little more than glue together code written by better engineers. It was never a hard formula to copy.
Outsourcing programming work, at the lowest cost and quality, to third-party libraries is leading to inevitable results.
Obviously, the next leap will be sophisticated supply chain attacks based upon poisoning AI.
e.g. The difficulty curve for writing apps went down a bit when iOS became popular, but nowadays iOS 17 is probably more complex than Snow leopard was in 2009 so there aren’t any low hanging fruit left for the median SWE.
I have to disagree with this.
There seems to be this weird mindset in tech that because there's problem X (the Five Eyes countries hacking each others' citizens at each others' request, Meta collecting data on users, a massive attack using xz that almost got into the wild, etc.) that China isn't a problem. It's this strange "our house isn't in order because of our own doing, so it doesn't matter if some dude off the street starts squatting in it" idea.
If you have a country that has a tech legacy mainly related to espionage and attacks on other countries' systems - and make no mistake, that's China's main legacy - don't buy their stuff, no matter how many times it's said that it's fine. At some point it won't be fine.
You can fix that and better secure FOSS projects; it's not one or the other.
I guarantee you more services link to glibc, and that glibc has a much, much larger attack surface than anything in systemd.
Many people have warned that systemd does way too much for an init system, and presents way too much attack surface. Cluck, cluck (chickens coming home to roost).
Firstly, it's not true: most systemd services don't link to libsystemd. It's useful for supporting the sd_notify feature, but plenty of things support it without using libsystemd.
Rich attack surface: there were like four dependencies that aren't glibc, and irrespective of this backdoor attempt almost all of them were already being made dlopen dependencies, which would've prevented this backdoor from working, since it would never have been triggered for sshd. (Note, also, that linking libsystemd into sshd was a downstream patch.)
Seriously:
That's what you get today in NixOS unstable. The truth is that libsystemd and systemd in general don't have ridiculous dependency trees or attack surfaces. Most likely the big reason this backdoor was being pushed so hard to make it into Debian and Fedora releases was that the clock was ticking on the very small “in” they had found in the first place.
There are a lot of criticisms of systemd that I think are reasonable, but this really isn't one.
I feel like a lot of people are really glossing over this point. What's happened here is that Red Hat and Debian have made a choice to patch OpenSSH in a way that opened it up to this attack.
It's a little ironic that e.g. Arch, which actually shipped the malicious code to end users (since it publishes changes so much faster), never would have executed the payload as shipped, because they didn't patch OpenSSH.
Maybe libsystemd shouldn't provide sd_notify then, if you're not supposed to use it?
10,000s of lines of boilerplate code? It's very easy to skip something in there. Why do we need to repeat everything?
The problem is exactly that: preprocessing, build scripts, etc.
Everything is a script, and therefore executable, instead of a configuration file/statement, which only increases the attack surface.
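The distinction matters because declarative configuration is inert data: it can be parsed, validated, and diffed without executing anything, whereas a shell script has to be run to be understood. A small sketch (the unit contents here are hypothetical) showing a systemd-style INI definition handled purely as data:

```python
import configparser

# A declarative, systemd-style service definition: just data, not code.
UNIT = """\
[Service]
ExecStart=/usr/sbin/sshd -D
Restart=on-failure
"""


def parse_unit(text: str) -> dict:
    """Parse a unit file into plain dicts; nothing is executed."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return {section: dict(cp[section]) for section in cp.sections()}
```

Note that `configparser` lowercases option names (so the key comes back as `execstart`); a real unit-file parser would preserve case. The point stands either way: an auditor can diff two unit files mechanically, while diffing two init scripts tells you little about what actually changes at runtime.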
I get the idea of systemd, and I do think the Unix principles were good for the 70s but maybe not so great today.
Still, in true LP fashion, a lot of design decisions are not made with stability or longevity in mind.
With systemd, it's a big problem because systemd is so widely distributed. Systemd failed on its promise to create a new standard for defining services. Right now a ton of projects ship their own supervisor (runit, supervisord) or a docker-compose file to reduce their contact surface with systemd. Look at what GitLab Omnibus does.
But with everything connected to systemd (udev, dbus), anything without systemd is sort of second class in terms of being tested. Ideally I would have the stability of Debian without the surprises from systemd. I tell people to "Press Ctrl+Alt+Del 7 times within two seconds" way too often.
Your hastiness to hate on systemd made you forget what we're even talking about…
We should be grateful it got caught so quickly. I sent Andres an honest thank-you email. It isn't financial (and I am just one individual), but it felt like the least I could do.
If there is a way to donate, I would. This person could've earned more via HackerOne or black market and instead went with an arguably better path. I don't think we can compete with the latter though, unless we start treating this for what it is: a weapon.
Like, what would be different if the software were closed source and the developers paid by companies? I think it would be at least as hard to notice such an exploit, and sometimes it might be easier (if the company is located in your jurisdiction).
Maybe the current mindset of assembling programs could be improved. There is a trend in some architectures to separate everything into its own container, and while I don't think it can be directly applied everywhere, that model gives more separation for cases like this. Engineering is an art of trade-offs, and maybe now we can afford to make different trade-offs than 30 years ago (when some of these things were decided).
DJB’s qmail, written in 1995, was made of 5 distinct processes, owned by different users, with no trust among them. Coincidentally, it was the MTA with the best security record for more than a decade (and also the most efficient one).
It would have likely had a similar record even if it was only a monolithic process - because DJB - but it was built as 5 processes so even if one falls, the others do not.
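The design idea here, mutual distrust between cooperating processes, can be sketched in a few lines. This is not qmail's code, just an illustration of the pattern: risky parsing runs in a forked child, and the parent accepts only a narrow, structured result over a pipe.

```python
import json
import os


def parse_in_child(untrusted: bytes) -> dict:
    """Run risky parsing in a separate process, qmail-style.

    If the parser is compromised or crashes, the parent sees only a
    failed child and a closed pipe, never a corrupted address space.
    """
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: does the dangerous work, reports a tiny result
        os.close(r)
        try:
            result = {"length": len(untrusted.decode("utf-8"))}
            os.write(w, json.dumps(result).encode())
        finally:
            os._exit(0)
    os.close(w)  # parent: read the child's answer, nothing else
    data = b""
    while chunk := os.read(r, 4096):
        data += chunk
    os.close(r)
    os.waitpid(pid, 0)
    return json.loads(data)
```

qmail goes further, running each stage as a distinct Unix user, so the kernel itself enforces the boundaries between them rather than the program's own discipline.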
It's endemic not just to open source. macOS has supported app sandboxing since Snow Leopard (2009), yet virtually no application outside the App Store (where sandboxing is mandatory) sandboxes itself. App sandboxing could stop both backdoors from supply chain compromises and vulnerabilities in applications in their tracks. Yet, developers put their users at risk and we as users cheer when a developer moves their application out of the App Store.
It's time for not only better funding, but significantly better security than the '70s/'80s Unix/Windows models.
At least with open source we are able to detect them.
Also, let's not forget that this particular backdoor was initially found through behavioral analysis, not by reading the source code. I think Linus' law "given enough eyeballs, all bugs are shallow" has been refuted. Discovering bugs does not scale linearly with eyeballs, you need the right eyeballs. And the right eyeballs are often costly.
If your implicit premise is that having the source code available makes code easier to analyze than closed source, you can also flip the argument around: it is easier for bad actors to find exploitable vulnerabilities because the source code is available.
(Note: I am a staunch supporter of FLOSS, but for different reasons, such as empowerment and freedom.)
Example: https://en.wikipedia.org/wiki/NSAKEY
I think currently even Android, iOS, and ChromeOS have far better security than most desktop and server OSes. I think of the widely-used general purpose OSes, only macOS comes fairly close because it adopted a lot of things from iOS (fully verified boot, sealed system partition, app sandboxing for App Store apps, protected document/download/mail/... directories, etc.).
However, the incentive, even if a company detected the infiltration, is to keep quiet about it. Let's say a closed-source "accessd" was backdoored: a database admin notices the new version (accessd 2024.6 SAAS version model 2.0+ with GPT!) is slower than the previous version, and they put it down to enshittification. Or they contact the company, which has no incentive to spend money to look into it. There's no way the database admin (or whoever) can look at the git commit chain and see where the problem happened.
[0] https://rigor-mortis.nmrc.org/@simplenomad/11218486968142017...
Unfortunately, the sarcastic tone starts to become a barrier to separating signal from noise about halfway through. Okay, you’re super clever, the NSA is a threat, too, we get it. Security vendors are largely hucksters, fine. What were we talking about again?