Regulation changes "why bother" to "oh crap".
> Adding TLS in front of HTTP when talking to an untrusted third-party server can only ever increase your attack surface.
No: against a MITM, it instantly removes everything inside the TLS tunnel from the equation, which is the entire point.
> [...] that's why we have file signatures in the first place.
You still don't understand that even before the cryptographic operations done to verify the signatures, you go through all those other layers: layers that are complex to implement, easy to misinterpret, and still being found flawed to this day. PGP is so terrible that no serious cryptographer even bothers looking at it in this day and age.
I'm starting to get the feeling that you're involved in keeping the package repositories stuck in the past. I can't wait for the next Apt bug where some MITM causes problems yet again.
I'm starting to get the feeling that you have no actual experience in threat modelling.
There is fundamentally no reasonable threat model where a plaintext connection (involving all these previously listed protocols) is safer against a MITM than an encrypted and authenticated one.
Constantly ignoring all the flaws outlined and just reiterating your initial opinion with no basis whatsoever is at best ignorance, at worst trolling.
HTTP with signed packages is by definition a protocol with authenticated payloads, and encryption exclusively provides privacy. And no, we're not singling out the least likely attack vector for the convenience of your argument - we're looking at the whole stack.
Where are all the new houses? I admit I am not a bleeding edge seeker when it comes to software consumption, but surely a 10x increase in the industry output would be noticeable to anyone?
To be honest, I think the surrounding paragraph lumps together all anti-AI sentiments.
For example, there is a big difference between "all AI output is slop" (which is objectively false) and "AI enables sloppy people to do sloppy work" (which is objectively true), and there's a whole spectrum.
What bugs me personally is not at all my own usage of these tools, but the increase in workload caused by other people using these tools to drown me in nonsensical garbage. In recent months, the extra workload has far exceeded my own productivity gains.
For the non-technical, imagine a hypochondriac using chatgpt to generate hundreds of pages of "health analysis" that they then hand to their doctor expecting a thorough read and a considered opinion, vs. the doctor using chatgpt as a sparring partner on a particular issue.
> Having to harden two protocol implementations, vs. hardening just one of those.
We're speaking of a MITM here. In that case no, you don't have to harden both. (Even if you did have to, ain't nobody taking on OpenSSL before all the rest, it's not worth the effort.)
I find it kind of weird that you can't understand that if all a MITM can tamper with is the TLS, then it's irrefutably a significantly smaller surface than HTTP+PGP+Apt.
We are speaking of the total attack surface.
1. When it comes to injecting invalid packets to break a parser, you can MITM TLS without a problem. This is identical to the type of attack you claimed was relevant to HTTP-only: feeding invalid data that would be rejected once the signature is authenticated.
2. Any server owning a domain name can have a valid TLS certificate, creating "trusted" connections, no MITM necessary. Any server in your existing mirrorlist can go rogue, any website you randomly visit might be evil. They can send you both signed but evil TLS packets, and malicious HTTP payloads.
3. Even if the server is good, it's feeding you externally obtained data that too could be evil.
There is no threat model here where you do not rely 100% on the validity of the HTTP stack and file signature checking. TLS only adds another attack surface, by running more exploitable code on your machine, without taking away any vulnerabilities in what it protects.
Modern TLS stacks are far from fragile, especially in comparison to PGP implementations. It's a significant reduction in attack surface when it's a MITM we're talking about.
Malicious mirrors remain a problem, but having TLS in the mix doesn't make it more dangerous. Potential issues with PGP, HTTP and Apt's own logic are just so much more likely.
Adding TLS in front of HTTP when talking to an untrusted third-party server (and yes, any standard HTTPS server is untrusted in this context) can only ever increase your attack surface. The only scenario where it reduces the attack surface is if you are connected with certificate pinning to a trusted server implementation serving only trusted payloads, and neither is the case for a package repo - that's why we have file signatures in the first place.
Having to harden two protocol implementations, vs. hardening just one of those.
(Having set up letsencrypt to get a valid certificate does not mean that the server is not malicious.)
Eh? CWE-444 would beg to differ: https://cwe.mitre.org/data/definitions/444.html
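For the unfamiliar: request smuggling works because two HTTP parsers can disagree about where one request ends and the next begins. A toy Python sketch of that CL.TE desync, with two deliberately naive, made-up parsers (not any real server's code):

```python
# Toy sketch of the CL.TE desync CWE-444 describes: two naive parsers disagree
# on where the request ends when it carries both Content-Length and
# Transfer-Encoding headers.
RAW = (
    b"POST /update HTTP/1.1\r\n"
    b"Host: mirror.example\r\n"
    b"Content-Length: 53\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"GET /smuggled HTTP/1.1\r\n"
    b"Host: mirror.example\r\n"
    b"\r\n"
)

def split_by_content_length(raw: bytes):
    head, _, rest = raw.partition(b"\r\n\r\n")
    headers = dict(line.split(b": ", 1) for line in head.split(b"\r\n")[1:])
    n = int(headers[b"Content-Length"])
    return rest[:n], rest[n:]            # (body, leftover bytes)

def split_by_chunked(raw: bytes):
    _, _, rest = raw.partition(b"\r\n\r\n")
    body = b""
    while True:
        size_line, _, rest = rest.partition(b"\r\n")
        size = int(size_line, 16)
        if size == 0:
            _, _, rest = rest.partition(b"\r\n")   # consume terminating CRLF
            return body, rest            # (body, leftover bytes)
        body += rest[:size]
        rest = rest[size + 2:]           # skip the chunk's trailing CRLF

_, leftover_front = split_by_content_length(RAW)
_, leftover_back = split_by_chunked(RAW)
print(leftover_front)  # b'' -> Content-Length view: nothing follows this request
print(leftover_back)   # b'GET /smuggled ...' -> chunked view: an extra, smuggled request
```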
> the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.
An attacker doesn't get to attack the client's HTTP stack without first piercing the protection offered by TLS.
> An attacker doesn't get to attack the client's HTTP stack without first piercing the protection offered by TLS.
You misunderstand: this means more attack surface.
The attacker can mess with the far more complex and fragile TLS stack, and any attacker controlling a server or server payload can also attack the HTTP stack.
Have you recently inspected who owns and operates every single mirror in the mirror list? None of these are trusted by you or by the distro, they're just random third parties - the trust is solely in the package and index signatures of the content they're mirroring.
I'm not suggesting not using HTTPS, but it is just objectively wrong to consider it to have reduced your attack surface. At the same time, most of its security guarantees are insufficient and useless for this particular task, so in this case the trade-off is solely privacy for complexity.
Assuming this all came through unencrypted HTTP:
- you're also trusting that the client's HTTP stack is parsing HTTP content correctly
- for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses
- you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)
- you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)
It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.
This is an improvement: HTTP/1.1 alone is a trivial protocol, whereas the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.
For technical reasons, unencrypted HTTP in practice also always means the simpler (and, for bulk transfers, more performant) HTTP/1.1, as standard HTTP/2 dictates TLS and the special non-TLS variant ("h2c") is not as commonly supported.
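To give a feel for how little there is to plain HTTP/1.1 on the wire, here's a minimal Python sketch of a GET over a raw socket; the host name and path are placeholders, not a real mirror:

```python
# Minimal sketch: a plain HTTP/1.1 GET over a raw TCP socket, no TLS involved.
# "deb.example.org" and the path are placeholders, not a real repository.
import socket

HOST, PATH = "deb.example.org", "/dists/stable/InRelease"

request = (
    f"GET {PATH} HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")

with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(65536):     # read until the server closes
        response += chunk

status_line = response.split(b"\r\n", 1)[0]
print(status_line)                       # e.g. b'HTTP/1.1 200 OK'
# The downloaded body is still authenticated by the detached signature check,
# not by anything in the transport.
```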
> for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses
You don't, just like you don't trust a TLS server to generate valid TLS (and tunneled HTTP) messages.
> you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)
You don't. Authentication 101 (which also applies to how TLS works): authenticity is always validated before the content is inspected or interacted with. These are the same rules TLS has to follow when it authenticates its own messages.
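As a rough sketch of that ordering in Python (the file names are placeholders, and it assumes a `gpg` binary with a keyring that already trusts the repository's signing key; this is not apt's actual implementation):

```python
# Sketch of "authenticate first, parse second" for a downloaded index file.
# File names are placeholders; assumes the gpg binary and a keyring that
# already contains the repository's signing key.
import subprocess

DATA = "Release"        # raw bytes exactly as downloaded, untouched
SIG = "Release.gpg"     # detached signature fetched alongside it

result = subprocess.run(
    ["gpg", "--verify", SIG, DATA],
    capture_output=True,
)
if result.returncode != 0:
    # Unauthenticated bytes never reach the parser.
    raise SystemExit("signature verification failed, refusing to parse")

# Only now is the content inspected.
with open(DATA, "rb") as f:
    for line in f:
        ...  # parse the index entries here
```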
Furthermore, TLS does nothing to protect you against a server delivering malicious files (e.g., a rogue maintainer or mirror intentionally giving you borked files).
> you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)
You don't, as the signature must be authentic from a trusted author (the specific maintainer of the specific package for example). The server or attacker is unable to craft valid signatures, so something "tacked-on" just gets rejected as invalid - just like if you mess with a TLS message.
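A toy illustration of that rejection, using a throwaway Ed25519 key from Python's `cryptography` package rather than the real PGP/apt machinery:

```python
# Toy demo: any bytes "tacked on" after signing make verification fail.
# Uses a throwaway Ed25519 key, not a real maintainer key or the PGP format.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()
package = b"original package contents"
signature = key.sign(package)

public = key.public_key()
public.verify(signature, package)                 # passes silently

try:
    public.verify(signature, package + b"malicious trailer")
except InvalidSignature:
    print("tampered payload rejected")            # this branch is what runs
```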
> It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.
The basis of your trust is invalid and misplaced: not only does TLS provide no additional security here, it is also the more complex, fragile and historically vulnerable beast.
The only non-privacy risk of using non-TLS mirrors is that a MITM could keep serving you an old snapshot of your mirrors (one that is valid and signed by the maintainers), withholding an update without you knowing. But such a MITM can also just break your connection to a TLS mirror, and then you can't update either, so no: it's just privacy.
I'm sure that image nerds would poke holes in it, but it seems to work pretty much exactly the way it does IRL.
The noise at high ISO is where it gets camera-specific. Some manufacturers make cameras that actually do really well at high ISO and high shutter speed. This seems to reproduce a consumer DSLR.
Even on old, entry-level APS-C cameras, ISO1600 is normally very usable. What is rendered here at ISO1600 feels more like the "get the picture at any cost" levels of ISO, which on those limited cameras would be something like ISO6400+.
Heck, the original pictures (there is one for each aperture setting) are taken at ISO640 (Canon EOS 5D Mark II at 67mm)!
(Granted, many are too allergic to noise and end up missing a picture instead of just taking the noisy one, which is a shame, but that's another story entirely.)
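For reference, the arithmetic behind those ISO figures (values taken from the comments above; stops = log2(ISO_b / ISO_a)):

```python
# How many stops apart the ISO values mentioned in this thread are.
from math import log2

def stops(iso_a: float, iso_b: float) -> float:
    return log2(iso_b / iso_a)

print(f"ISO 640 -> ISO 1600: {stops(640, 1600):.1f} stops")   # ~1.3
print(f"ISO 640 -> ISO 6400: {stops(640, 6400):.1f} stops")   # ~3.3
```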