There's a certain romance to the idea of a random video that looks like colorful noise actually crashing or exploiting your device. It's basically the closest thing to Snow Crash that we have.
It was really bad when the best way to play a video on a website was through the goddamn Adobe Flash plugin - isn't it crazy thinking back on how that was just normal for so long? [1] [2]
People lament the loss of all those flash games and stuff but some of those sites were SKETCHY, not to mention ad networks with unchecked .swf "creatives"...
Oh yeah, I remember the "flash super cookie" being a well-known marketing technique.
You know what's really crazy? Before the Snowden revelations in 2013, something like 70% of the web was using http:// (including sites like Amazon and Facebook, IIRC - although checkout pages may have been secured). Let's Encrypt really did a great job of getting the Web past the starting line of basic security.
I'd hope for someone to reverse engineer the brain (or a specific individual's brain), then figure out exactly what incomprehensible colourful noise of a video to show you, so that at the end you mysteriously know how to speak Cantonese.
Could a hack be possible by exploiting the GPU? Like making a 3D game scene that's actually encoding malware so when the GPU tries to render it you now have access to the system resources?
Don't know about a scene, but EC2-like access to virtual GPUs was rumored to be pretty dangerous, potentially even to the hardware itself (think something like changing the voltages via undocumented registers). The attack surface there is enormous. Those were just rumors, though - maybe someone here knows better.
You can target co-processors in general, e.g., here [1], thus I assume people do hack GPUs.
Generally, the better we become in introducing mitigations, the more expensive attacks become and attackers have bosses, budgets and deadlines. They will try to find other avenues to land on a target :-)
We do read about occasional vulnerabilities in phone GPUs, but I do have to wonder: wouldn't the compartmentalization and the difference in compute capabilities between CPU and GPU inherently limit what a vulnerability in the GPU of a typical PC can actually exploit?
This is a pretty bad way to make an announcement. The abstract mentions iOS, Firefox, VLC and multiple Android devices.
1. At least 2 of those are software that users CAN upgrade manually, but the fixed versions are buried deep within the text. If users can take action, that should be made incredibly clear.
- I have been unable to quickly skim and find which version of Firefox has fixes for these.
- VLC has a fix for a use-after-free issue in 3.0.18, but I can't tell at a glance whether other issues still persist.
2. Do we know anything about whether this is exploited in the wild or not? I can't tell from the abstract nor the conclusion.
The technical information is valuable, but anything actionable for users to mitigate effects is really hard to find :(.
[Edit] Ok, the disclosure and ethics subsection does mention that Apple, Mozilla and VLC have fixed these bugs in their latest releases, and Google and MediaTek are aware of the problems.
Those are fair questions, but note that this is a research paper (and an excellent one at that), not a blog post meant for general audiences. The focus of the abstract and introduction is higher-level: the contributions of the paper, rather than what it means for end users.
I fully agree it is a research paper and the subject matter is absolutely great! My complaint is that the paper hints at what may be affected in the abstract, that got picked up in the title, and then it's challenging to quickly find actionable details for either an end user or someone only mildly interested in the topic.
It also linked to a GitHub page, which is a great place to include the short non-research content. I think the GitHub page was still a WIP at the time the article appeared on HN.
I know my attitude was critical, but I also tried to include that info in the message (even though I found the responsible disclosure subsection after posting my original comment). I also feel that info should be present on HN in one of the top comments for these kinds of articles (regardless of whether it's mine or someone else's).
The problem with a general-purpose fuzzer is that the H.264 format is complex - you'd end up with a lot of syntactically-incorrect files (which decoders would easily reject) whereas H26Forge is a specialized fuzzer that ends up with syntactically-correct but semantically-incorrect files, and that's how it finds actual vulns before the heat death of the universe.
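A rough sketch of that "syntactically valid, semantically wrong" idea: decode a toy header into named fields, push one field to an extreme-but-encodable value, and re-encode it with valid syntax. (The schema and field names here are invented for illustration; H26Forge works on real H.264 syntax elements.)

```python
# Hypothetical sketch: mutate field *values*, never the syntax, so the
# decoder accepts the file but downstream logic (buffer sizing,
# reference lists, ...) sees values it never expected.
import random

# a toy "header": each field is (name, bit_width) -- made-up names
SCHEMA = [("width_in_mbs", 8), ("height_in_mbs", 8), ("num_ref_frames", 4)]

def decode(bits: str) -> dict:
    fields, i = {}, 0
    for name, w in SCHEMA:
        fields[name] = int(bits[i : i + w], 2)
        i += w
    return fields

def encode(fields: dict) -> str:
    return "".join(format(fields[name], f"0{w}b") for name, w in SCHEMA)

def mutate(fields: dict) -> dict:
    # pick a field and set it to the maximum value the syntax allows:
    # still parses fine, but semantically out of range
    out = dict(fields)
    name, w = random.choice(SCHEMA)
    out[name] = (1 << w) - 1
    return out

original = {"width_in_mbs": 120, "height_in_mbs": 68, "num_ref_frames": 2}
mutant = mutate(original)
assert decode(encode(mutant)) == mutant  # still syntactically valid
```

A dumb bit-flipping fuzzer spends most of its time producing files the parser rejects at the first malformed field; mutating at this level keeps every file inside the parser and exercises the logic behind it.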
Re Rust: the problem here is hardware acceleration, as far as I can tell. Even if we had a pure Rust H.264 decoder, you'd probably still want to use whatever hardware decoder you have, to use fewer resources overall. The drivers might be the place to look, and there's some progress on that front in Android for example, but as things stand fuzzing like this is extremely valuable.
Isn't the whole claim to fame for AFL that it largely mitigates or avoids that problem by tracking branch coverage so it doesn't waste time permuting the input in ways that don't change the program behavior meaningfully?
AFL works by trying to modify bits and seeing what branches change direction. Arithmetic coding means this relationship desyncs almost instantly, so it’s hard to mutate into interesting test cases.
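A toy illustration of the desync, using the Exp-Golomb codes H.264 actually uses for header fields (the field values here are arbitrary): flip one bit early in the stream and every value after it gets re-framed, so coverage feedback can't attribute the change to a single field.

```python
# One flipped bit in a variable-length-coded stream re-frames every
# field that follows -- AFL-style bit flips rarely map to a clean,
# single field change.

def ue_encode(n: int) -> str:
    """Unsigned Exp-Golomb ue(v): (leading zeros) then binary of n+1."""
    bits = bin(n + 1)[2:]
    return "0" * (len(bits) - 1) + bits

def ue_decode_all(stream: str) -> list:
    """Decode as many complete ue(v) values as fit in the bitstring."""
    out, i = [], 0
    while i < len(stream):
        zeros = 0
        while i < len(stream) and stream[i] == "0":
            zeros += 1
            i += 1
        if i + zeros >= len(stream):
            break  # incomplete codeword at the tail
        codeword = stream[i : i + zeros + 1]
        out.append(int(codeword, 2) - 1)
        i += zeros + 1
    return out

fields = [3, 0, 7, 2, 1]
stream = "".join(ue_encode(n) for n in fields)
flipped = stream[:2] + ("1" if stream[2] == "0" else "0") + stream[3:]

print(ue_decode_all(stream))   # [3, 0, 7, 2, 1]
print(ue_decode_all(flipped))  # [33, 12] -- everything downstream re-framed
```

With arithmetic coding (CABAC) the effect is even worse, since every decoded bin depends on the decoder's evolving probability state.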
Yes, fuzzing does work on decoders. I can't remember how deep AFL managed to get, but I do remember a flurry of crash bugs against our decoder when somebody first tried it.
IMO any complicated protocol or format will be subject to crashing bugs because just verifying correct behavior is difficult.
To discover bugs, just build the xyz file yourself. People tend to use tools to generate content, and those tools generally don't make invalid content. That's a general problem with qa/verification.
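The "build it yourself" point, sketched on an invented length-prefixed format: authoring tools always emit records whose declared length matches the payload, so the only way to exercise the mismatch path is to hand-craft one that lies.

```python
# Sketch: hand-craft a record whose declared length disagrees with the
# actual payload -- the kind of inconsistency generator tools never
# produce, but which a parser must still survive. (Toy format, invented
# for illustration.)
import struct

def make_record(payload: bytes, claimed_len=None) -> bytes:
    length = len(payload) if claimed_len is None else claimed_len
    return struct.pack(">I", length) + payload  # 4-byte big-endian length

def parse(record: bytes) -> bytes:
    (length,) = struct.unpack(">I", record[:4])
    payload = record[4 : 4 + length]
    if len(payload) != length:
        raise ValueError("truncated record")  # the check a naive parser skips
    return payload

good = make_record(b"hello")                     # length matches payload
bad = make_record(b"hello", claimed_len=0xFFFF)  # lies about its size

assert parse(good) == b"hello"
```

A parser that trusts the declared length and indexes past the buffer is exactly the class of bug this shakes out.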
I wonder if the iOS versions would have been tracked down if the researchers didn't have access to Corellium. I'm glad they did, since it sounds like a pretty nasty exploit you could trigger from almost any web page.
The iOS issues were found by directly playing generated videos on an actual iPhone with iOS 13.3. The kernel panics helped guide us on where to look in Ghidra. Corellium was helpful for kernel debugging, and testing newer versions of iOS. Without Corellium, kernel debugging may have been more painful.
[1] http://phrack.org/issues/69/8.html#article
[2] http://phrack.org/issues/69/13.html
[1]https://objectivebythesea.org/v5/talks/OBTS_v5_iBeer.pdf
That's the bit I was hoping to find at a glance.
> discovered an out-of-bounds read that causes a crash of the Firefox GPU utility process and a user-visible information leak
:O
Is that why sometimes Firefox will go all wonky-blinky on random websites and eventually crash?