oneepic · 4 years ago
In case anyone was confused like me, this appears to be describing a past bug from ~10 months ago, not an open one. (the blog post links to the bug also) https://bugs.chromium.org/p/project-zero/issues/detail?id=21...
geofft · 4 years ago
The commit messages are pretty unclear about whether there's any security impact:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

At least the second one says "... leading to use-after-free errors." But style in the Linux community is to not mention security impact and just to give a dense explanation of the bug itself. (Jann Horn, as a person who does care about security, tends to be better about this than most kernel developers; if the fix were from the average subsystem maintainer, I wouldn't expect to even see a mention of "use-after-free.")

Also, if you look at the Project Zero bug log (https://bugs.chromium.org/p/project-zero/issues/detail?id=21...), it's clear that Horn wasn't totally sure whether/how this could be exploited, just that it seemed funny.

(This should probably lead you to question whether "stable" kernels are a meaningful concept and whether the hypothesis that stable kernels are patched / otherwise do what they claim to do is even falsifiable.)

Datagenerator · 4 years ago
Thank you, security officer on duty goes back to sleep.
diegocg · 4 years ago
It is also a local exploit, not a remote one
Steltek · 4 years ago
At some point in my career, I picked up the notion that there are an infinite number of local exploits lying around on your average Linux box: any local user could find their way to root unless you took extra steps to lock things down. I'm not saying that there are still bash one-liners that give you a root prompt, just that the "attack surface" of privileged binaries and kernel APIs is so enormous that there must be something to leverage. I don't mean to pick on anything unfairly, but I figured a specially crafted filesystem or FUSE command would do the trick quite easily.

Is that still the case or am I just old?

ncmncm · 4 years ago
The distinction is moot. A remote exploit that gets you local execution combined with a local to root gets you remote to root, and game over.
amelius · 4 years ago
Can it be turned into a remote exploit through a web-browser? (Assuming the user knows what they are doing)
nyc_pizzadev · 4 years ago
A handful of C projects I have seen use magic numbers in allocated structs to prevent use-after-free and other memory bugs[0]. Basically, in this case, when the ref count hits zero and the struct is freed, the magic is zeroed and any further access will be stopped. The author makes no reference of this, so I guess this isn’t a widespread safety pattern?

[0] https://github.com/varnishcache/varnish-cache/blob/4ae73a5b1...

vlovich123 · 4 years ago
It’s possible some projects do this correctly, but I suspect most have a false sense of security: the compiler will elide stores to a struct that is about to be freed, and neither C, C++, nor any LLVM-based language is really immune from this [1].

Usually a more thorough approach is to turn on malloc scribbling, ASAN or valgrind which is something Darwin’s allocator can be told to do (it’ll scribble separate uninitialized and freed patterns).

I could see the appeal of there being a magic value though. I think that’s what memset_s is for so hopefully your favorite project is doing that properly.

[1] http://www.daemonology.net/blog/2014-09-04-how-to-zero-a-buf...

nyc_pizzadev · 4 years ago
> I suspect most have a false sense of security as the compiler will elide all stores that are happening in a struct about to be freed

The magic field is reset before returning the pointer to the allocator, so it’s definitely a live write to a valid pointer.

geofft · 4 years ago
This only protects you against unintentional use-after-free. If a use-after-free of struct ws is something you're worried about an attacker intentionally causing, then for this check to be useful, the attacker needs to control one of those four char * pointers and point them somewhere useful. Typically they'd do that by inducing the program to re-allocate the freed memory as a buffer under their control (like input from the network) and then filling it in with specific bytes.

If they can do that, they can very easily fill in the magic numbers too. It's even easier than pointers because it doesn't require inferring information about the running program - the magic number is the same across all instances of Varnish and right there in the source.

"Heap spray" attacks are a generalization of this where the attacker doesn't have precise enough control about what happens between the unwanted free and the reuse, but they can allocate a very large buffer (e.g., send a lot of data in from the network, or open a lot of connections) and put their data in that way. This approach would be basically perfect for defeating the "magic number" approach.

(The blog post itself has a discussion of a number of more advanced variants on the "magic number" approach - see the mention of "tagging pointers with a small number of bits that are checked against object metadata on access".)

tjalfi · 4 years ago
The Stratus recommendations for structure marking would be a bit harder to defeat.

Here's an excerpt from an old Stratus presentation[0] on writing robust software.

Add TYPE, SIZE, VERSION, and OWNER to data structure

TYPE: Unique number for each different structure

SIZE: in bytes

VERSION: Changed whenever structure declaration changes

OWNER: Unique ID of owner, must be independent of structure contents; can be UID, least significant bits of clock, etc.

[0] https://web.archive.org/web/20170303065858/http://ftp.stratu...

nyc_pizzadev · 4 years ago
> they can very easily fill in the magic numbers too

Right, recreating the magic does side step this defense.

The context for software security these days is defense in depth, not something like "total defense" anymore. In this case, the use of magics is more of a dev-and-test mechanism than a runtime protection, although it does still catch some errors at runtime. What this means is that if you use magics with proper testing and load testing, errors should surface before you release.

tjalfi · 4 years ago
Multics and Stratus VOS are two operating systems that use structure marking[0].

[0] https://multicians.org/thvv/marking.html

pmarreck · 4 years ago
Solutions like this depend on the will, skill and ethics of the coder.

Better IMHO to design a language in such a way that dangerous errors like this are completely impossible.

(I mean... this is basically why I switched from Ruby to Elixir for web dev, eliminating an entire class of bugs... If the language itself doesn't provide an error-reduction feature, then you are reliant on other developers to "do the right thing" and lose any guarantees)

nottorp · 4 years ago
The title sounds like it's the end of the world. Reading, I see it's another local exploit. And long fixed to boot.

Can we stop having tabloid titles for technical matters?

tssva · 4 years ago
The actual title of the article is "How a simple Linux kernel memory corruption bug can lead to complete system compromise". Which is a much less tabloid title than the changed title here. It also more properly reflects the purpose of the article which isn't discussing the specific bug but how such bugs can be exploited and more importantly how to prevent such bugs from being exploited.
bruo · 4 years ago
This text is not a news report, it’s a technical one about this specific bug. It shows how the attack develops and suggests mitigations at the kernel development level.

The bug itself is small, yet it led to a whole-system compromise, and the title is very good at guiding us to the point they are trying to make: memory corruption is a problem that needs to be addressed at the earliest stages, even if the overhead seems not worth it.

rndgermandude · 4 years ago
I agree. The title is quite factual.

It would be nice if the article stated what's affected more clearly, and importantly, that patches were rolled out long ago for most distros.

rndgermandude · 4 years ago
Nothing is ever really just a local exploit. It's always also one half of a remote exploit chain...
r1ch · 4 years ago
I think the title is fine, it's showing how even the most simple memory safety bugs can be exploited to lead to system compromise. Not every submission has to be about something happening right now.
fsflover · 4 years ago
If you care about such attack vectors, consider security through isolation, which can be provided by Qubes OS: https://qubes-os.org.
alexfromapex · 4 years ago
Preventing bugs like this is where Rust would shine in the kernel
chc4 · 4 years ago
Rust would help with bugs like the initial memory unsafety. Half the blog post is about resilience even in the face of memory unsafety though, especially since the entire point is that there only has to be one bug, in any legacy subsystem, to exploit the entire kernel. Using Rust doesn't magically add any of those defense-in-depth mitigations and pessimistic sanity checks.
lmm · 4 years ago
Resilience is impossible in C-family languages, given undefined behaviour. Any defence-in-depth checks you add can only be triggered once you're already in an undefined state, so the compiler will helpfully strip them out (this has already happened in Linux and is the reason they build with -fno-delete-null-pointer-checks, but C compilers have very little appetite for broadening that kind of thing).
vlovich123 · 4 years ago
Did you read the last part of the article? It explicitly says that Rust (or some other kind of languages guarantees) would absolutely remove the need for more complex runtime measures. Additionally, such checks stop the exploit chain early before it’s able to pick up steam.
atoav · 4 years ago
But it might remove another hole in the swiss cheese security model, which can be worth it.
skavi · 4 years ago
Do we know what the performance impact of adding all these checks is?


Koffiepoeder · 4 years ago
To be honest, the article makes much more nuanced suggestions for avoiding these kinds of bugs. I am not sure Rust would even have helped here, since the cause seems to have been a race condition arising from the wrong lock being used. It might have been possible to avoid in Rust with an RwLock, but in this case that was also the fix for the original bug (using the correct lock). I have only looked into this bug report semi-thoroughly, however, so I might be mistaken.
athrowaway3z · 4 years ago
I was typing a comment about how Rust wouldn't have made a difference for the race condition, but after reading through it again to be sure, I'm now on the side that Rust would have errored on the original bug.

  spin_lock_irq(&tty->ctrl_lock);
  put_pid(real_tty->pgrp);
  real_tty->pgrp = get_pid(pgrp);
  spin_unlock_irq(&tty->ctrl_lock);
Rustifying this would be something like:

  let mut tty_lock = tty.ctrl_lock();
  put_pid(real_tty.pgrp);
  real_tty.pgrp = get_pid(pgrp);
  std::mem::drop(tty_lock);
Which would give an error that you are not allowed to mutate real_tty.pgrp.

nyanpasu64 · 4 years ago
The problem was that one of two threads locked the wrong lock before accessing a shared resource (when two threads read or write shared memory, both sides must acquire mutexes or it's useless), resulting in a data race.

Rust could prevent this issue by requiring that all non-exclusive accesses to the shared data acquire the mutex (and if you use Mutex<T> which wraps the data, you'll always acquire the right mutex). The & vs. &mut system can model exclusive access ("initialization/destruction functions that have exclusive access to the entire object and can access members without locking"). It doesn't help with RCU vs. refcounted references, or "non-RCU members are exclusively owned by thread performing teardown" or "RCU callback pending, non-RCU members are uninitialized" or "exclusive access to RCU-protected members granted to thread performing teardown, other members are uninitialized". And Rust is worse than C at working with uninitialized memory (Rust not using the same type for initialized vs. uninitialized memory/references is a feature, but uninitialized memory references are too awkward IMO).

alexgartrell · 4 years ago
I shared this perspective, but luckily my job is awesome and (in a routine 1:1!) Paul told me why it's less straightforward than I thought: https://paulmck.livejournal.com/62436.html

My takeaway was essentially that you get sweet perf wins from semantics that are hard to replicate with a type system that's also making really strong guarantees, without making the code SUPER gross.

encryptluks2 · 4 years ago
Seriously, this comment and similar comments are why I'm pretty much convinced that Rust will be booted from the Linux kernel altogether.
ylyn · 4 years ago
If enough people want it, it will happen.