blueflow · 9 months ago
Because bash, for some goddamn reason, loads the bashrc for interactive shells AND when started by sshd, regardless of whether the shell is interactive or a tty is present. Bash (and only bash) literally has a special case for sshd that enables this kind of exploit.

As a result of this, git and rsync won't work at all if the bashrc on the remote machine writes any data to stdout, like setting a window title.

To work around that, every bashrc on this earth needs a guard that returns early for non-interactive shells, just to avoid this specific bug (see the sketch below).
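A minimal version of that guard, assuming bash (this is roughly the Debian-default variant):

```sh
# Return early when this shell is not interactive, so sshd-invoked
# non-interactive shells (git, rsync, scp) get a clean stdout.
# $- contains "i" only for interactive shells.
case $- in
    *i*) ;;       # interactive: fall through, run the rest of .bashrc
      *) return;;
esac
```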

yjftsjthsd-h · 9 months ago
Wait, ssh doesn't let you specify the remote command without running it through the user's shell? That seems like a deficiency too IMHO.
blueflow · 9 months ago
Command strings must be expanded into an argument vector by some kind of shell. SSH itself has no way to execute a program from an argument vector the way execv does.
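A quick illustration of the consequence (assuming a POSIX shell on the remote side):

```sh
# ssh joins its command arguments with spaces into a single string
# and hands that string to the remote login shell to re-parse,
# so local quoting is lost in transit:
ssh host ls "my file"     # remote shell runs: ls my file  (two arguments)
ssh host ls "'my file'"   # a second quoting layer keeps it one argument
```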
nurettin · 9 months ago
It is simpler to export paths locally, so the remote doesn't have to know your file/folder structure.
pjc50 · 9 months ago
This is sort of a feature that allows for restricted shells such as menu systems.
kpcyrd · 9 months ago
It's a limitation in the ssh protocol. I wish they would fix it, but I'm not holding my breath. Trying to do anything about it would be a compatibility nightmare.

If you need to pass data through ssh you're better off doing it through stdin.
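For example, one common pattern (paths here are purely illustrative):

```sh
# keep the data out of the command string entirely by piping it
# over the ssh channel's stdin:
tar -C ./src -cf - . | ssh host 'tar -C /srv/dest -xf -'
```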

superb_dev · 9 months ago
> We can’t change git’s shell to /sbin/nologin or /bin/false, or users wouldn’t be able to connect over SSH.

Git actually has a solution for this! I don't know if it would work with the custom Python stuff going on, but you can set the login shell to `git-shell`.
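Roughly like this, assuming a dedicated `git` account (the path to git-shell varies by distro, and it usually has to be listed in /etc/shells first):

```sh
# register git-shell as a valid login shell, then assign it:
command -v git-shell | sudo tee -a /etc/shells
sudo chsh -s "$(command -v git-shell)" git
```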

amelius · 9 months ago
Yeah, I tried that, but it doesn't work well with git-lfs (large file storage). At least, it didn't last time I tried.
asdffdasy · 9 months ago
So it works perfectly for most sane git use cases.
jbverschoor · 9 months ago
Or just use git over HTTPS. And heck, if that's such a big problem, switch to a VCS that you can properly manage.
xenophonf · 9 months ago
You shouldn't use Git over HTTPS. With SSH, you can use a hardware authenticator that requires both proof of ownership (i.e., the unlock PIN) and proof of possession (i.e., physical touch) out of the box. That's technically possible over HTTPS, of course, but I have yet to see a Git server that works that way.
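With a reasonably recent OpenSSH and a FIDO2 authenticator, that looks roughly like this:

```sh
# generate a hardware-backed key; FIDO2 keys require a physical touch
# (possession) per use by default, and -O verify-required additionally
# demands the authenticator's PIN (ownership):
ssh-keygen -t ed25519-sk -O verify-required
```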
cadamsdotcom · 9 months ago
Nice writeup!

Thinking generally, it seems something like the xz/lzma backdoor could be snuck in by one or two nefarious people colluding across packaging and upstream development, especially if we are talking about nation-state actors who can afford to look legit for years before being trusted to work without oversight - then, when no one is watching, sneak in a backdoor.

I feel we are in a very innocent age and will look wistfully back at the days we trusted our anonymous open source brethren.

On macOS I think about this every time I “brew install”, and every time oh-my-zsh auto-updates. Do Linux users think about this?

tremon · 9 months ago
I think about this every time I install software. As a citizen of a small European nation, 100% of the software I use is under the control of a foreign government, and I trust none of them. At least with open source software, there is a better chance of nefarious changes being detected by at least one of the parties building and packaging the software. With proprietary software, even that small level of assurance is not available.
akimbostrawman · 9 months ago
There is a very big risk difference between an upstream package like xz and running random applications with brew or, god forbid, zsh auto-update.

>Do Linux users think about this?

Yes, that's why I avoid packages not in the official distro repositories and, where possible, further minimize risk with additional security layers such as sandboxing (Flatpak and Firejail), mandatory access control (AppArmor or SELinux), virtualization (KVM/QEMU), and application firewalls (OpenSnitch). A couple of those are sketched below.
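For instance (the application names are placeholders; the flags are real Firejail/Flatpak options):

```sh
# sandbox an untrusted app with no network access:
firejail --net=none untrusted-app

# revoke an installed flatpak's access to the home directory:
flatpak override --user --nofilesystem=home org.example.App
```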

mmh0000 · 9 months ago
I think about it all the time. But I find it overwhelming and shut down.

I'll just enumerate my random thoughts:

* If you're worried about nations, remember that they can, quite literally, send ninjas-in-attack-helicopters at you.

* Most nations already have "laws" that "require" you to provide passwords/data access on demand[4ab].

* If you use any "cloud", there's a high likelihood that it's already backdoored, either through legal means (e.g., National Security Letters) or "not so legal, but who's going to stop them?" means, such as plain old hacking[1]

* All consumer CPUs already have built-in backdoors that can't really be disabled (well, kind of, but who really knows if disabling is effective)[2abc]

* Most printers print secret codes on printed documents that link back to the printer[3]

* I have no control over device firmware and some important drivers. I really don't know what my network card firmware is doing when I'm not using it and it has DMA to my system RAM. I "need" nvidia proprietary drivers to have a decent experience, no idea what they actually do.

* Nearly every piece of software includes some form of "Analytics" or "Telemetry", which often doesn't actually turn off when you click the stupid opt-out button.

[1] https://www.npr.org/sections/thetwo-way/2013/10/30/241855353...

[2a] https://en.wikipedia.org/wiki/Intel_Management_Engine

[2b] https://en.wikipedia.org/wiki/AMD_Platform_Security_Processo...

[2c] https://en.wikipedia.org/wiki/ARM_architecture_family#TrustZ...

[3] https://en.wikipedia.org/wiki/Printer_tracking_dots

[4a] https://en.wikipedia.org/wiki/Key_disclosure_law

[4b] https://www.eff.org/wp/digital-privacy-us-border-2017

registeredcorn · 9 months ago
> I feel we are in a very innocent age and will look wistfully back at the days we trusted our anonymous open source brethren.

There seems to be an inflection point built from several things:

* Enough users

* Enough time

* Enough cost

* Enough financial reward

Eventually, as those things converge, any scene loses its shared values and...innocence(?).

I think back to when RMS was aghast at the idea of MIT adding usernames and passwords to their machines in 1977. [1] It's a battle that I still feel bad he lost; a presumption of universal access is something I am intrinsically drawn to. It breaks my heart that any sort of restriction on technological access exists, in spite of the obvious need for computer security. Think about it - in the modern day, it's not only unthinkable but largely impossible to access most computers without some form of locked-down authentication - even on publicly accessible computers at libraries! Most operating system setups are designed with the fundamental assumption that you will be using both, regardless of whether it's a computer shared by 50 people in an office or a personal laptop in a locked room at home that no other person will ever touch or see.

I sit here and think about how there was a period when it was simply understood that you treat computers and the network with a kind of reverence and respect. That you conduct yourself and the things you do in a manner that is good, because you want that thing to remain usable, accessible, and enjoyable by other people who share that same inner passion - that hunger! There did not need to be any threat of legal consequence - no laws even existed in regards to it. You did right merely because you grasped that it was right, and you desired to do what was right. You did things right because you had a passion, a care, and a common interest shared with all the other people using those systems in a way that was good for all.

Sure, perhaps there were a few jokes and gags. A few fights or arguments over this or that. Plenty of "frivolous" things you could do, like play games and whatnot, but there was an obvious social understanding that you did not do what was bad because it was bad. You wouldn't do bad things for the same reason you wouldn't stop in the middle of a sidewalk and start using the bathroom in public - because it's unacceptable to do so; it was inconsiderate to do so.

I feel like there is a flower of naivety that has wilted with time. It's not quite dead, but so many of its petals have dried up, fallen off, and been crushed beneath the boot of financial incentive.

I imagine it to be similar to how the invention of the car must have felt. So much early optimism and so many obvious benefits. Only for it to be used to escape police. To transport booze and drugs. To be stolen from owners and sold for parts. To kidnap children in. To hit pedestrians with. To use in wartime. It's a grim bleakness in life that all things which can be cherished and enjoyed by the kindhearted are so horribly abused by the malcontent.

[1] https://en.wikipedia.org/wiki/Richard_Stallman#Harvard_Unive...

lrvick · 9 months ago
For those looking for alternatives to the status quo on Linux supply chain security, check out Stageˣ.

It is 100% deterministic, hermetic, and reproducible. It is also full-source bootstrapped from 180 bytes of human-auditable machine code, all released artifacts are multi-party reproduced, reviewed, and signed, and it is fully container-native all the way down to "FROM scratch", making it easy to pin dependency hashes and reproduce your own projects with it.

I started it after years of unsuccessful pleading with existing distros to stop giving ultimate solitary trust to -any- maintainers or sysadmins involved in the project.

https://codeberg.org/stagex/stagex

green7ea · 9 months ago
This project looks amazing — I didn't think bootstrapping like this was possible. Kudos on the project :-).

This might displace Chainguard as my go-to docker images :-).

noman-land · 9 months ago
Pardon my naivete but I've heard Nix described in many of the same terms. What are the differences and similarities between Stageˣ and Nix/NixOS?
lrvick · 9 months ago
I am obviously pretty biased having authored a failed RFC to Nix to mandate signing, and having founded StageX, but here goes.

Unlike StageX, Nix has a wikipedia-style low friction approach to packaging allowing a large community to maintain a huge number of packages which are signed by a central trusted party, while being -mostly- reproducible. It relies on a custom language and toolchain for packaging. Nix is designed for hobbyists seeking to reproduce their own workstations.

Unlike Nix, StageX is 100% reproducible and all changes are independently signed by authors and reviewers, then artifacts are reproduced and signed by multiple distributed maintainers. It is also full source bootstrapped, and is built on the broadly compatible OCI standard for packaging. StageX is designed for production threat models where only compiler and infrastructure toolchains are needed and trusting any single person is unacceptable.

One maintainer even uses Nix as a workstation to contribute to StageX. They have fundamentally different goals and use cases.

ufo · 9 months ago
I've got to say, Git resignifying `--` and requiring `--end-of-options` instead is bonkers
musicale · 9 months ago
The pathway of untrusted/malicious input -> trusted command line argument seems to be a common problem, and one that could possibly be mitigated by better type/taint checking.

It looks like there is some prior work in this area, but it hasn't resulted in commonly available implementations (even something basic like a type/taint checking version of exec() etc. on one side and getopt() etc. on the other.)

PhilipRoman · 9 months ago
I could've sworn I remember something about bash and glibc cooperating to indicate which arguments come from expanded variables but I cannot find anything on the internet or in the sources. Either I'm going insane or it was an unmerged proposal.
pests · 9 months ago
Sadly, due to legacy.

`--` already disambiguates revisions and paths in some commands, so another option was needed (sketch below).
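Roughly the distinction (`$untrusted_rev` is illustrative):

```sh
# `--` only separates revisions from pathspecs:
git log "$rev" -- "$path"

# `--end-of-options` ends option parsing, so an attacker-controlled
# value starting with "-" is read as a revision, not as an option:
git log --end-of-options "$untrusted_rev"
```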

immibis · 9 months ago
Just write all commands with structured I/O instead, like PowerShell.

Now I want an operating system where everything is a YANG model...

INTPenis · 9 months ago
>In addition, this is a self-service application, in the sense that anyone can create a Fedora contributor account and gain authenticated access to various services.

I legitimately wanted to get a package into Fedora a few years ago, a service that did not already exist, and I couldn't get past the fact that they require new contributor accounts to be sponsored by someone who is already a contributor. I was unable to secure sponsorship from anyone and just gave up.

pluto_modadic · 9 months ago
this could be a useful mechanism actually... shame it didn't work out
ForOldHack · 9 months ago
Then you have someone to blame for sponsoring an exploit, versus some unsponsored person who identified the same exploit and patched it, with no one bothering to check either way. I guess I got lucky never trusting Mint. I did trust Slackware, because I knew someone who trusted Pat V. I guess I also need to rub shoulders with Linux maintainers rather than people like Marc Andreessen. I did meet RMS, but he has no personality. Woz? Wolfram? Anna V? I met the SUSE people, and one of the Red Hat maintainers, who gave me a T-shirt, a poster, two bumper stickers, and the latest distribution of Red Hat. It was easier a decade ago...

Despite the amount of brilliance here, again, we have never had a single meet and greet.

INTPenis · 9 months ago
Agreed, I have been a RHEL ecosystem user for 11 years now. My experience in the Fedora community was actually comforting to me.
guillem_lefait · 9 months ago
NoLimitSecu, a French cybersecurity podcast, released an episode yesterday with the authors: https://www.nolimitsecu.fr/compromission-de-distributions-li...

It was amazing to hear that they chose the weakest path, argument injection, and were able to find a vector within two weeks, twice (Fedora + openSUSE).

b112 · 9 months ago
Does anyone else have the title overlay taking up 2/7ths of the top of the screen?