Readit News
arghwhat commented on Exposure Simulator   andersenimages.com/tutori... · Posted by u/sneela
ChrisMarshallNY · a day ago
That does a fairly good job.

I'm sure that image nerds would poke holes in it, but it seems to work pretty much exactly the way it does IRL.

The noise at high ISO is where it gets camera-specific. Some manufacturers make cameras that actually do really well at high ISO and high shutter speed. This seems to reproduce a consumer DSLR.

arghwhat · a day ago
With the disclaimer that I am comparing to the memory of some entry-level cameras, I would still say that it's way too noisy.

Even on old, entry-level APS-C cameras, ISO1600 is normally very usable. What is rendered here at ISO1600 feels more like the "get the picture at any cost" levels of ISO, which on those limited cameras would be something like ISO6400+.

Heck, the original pictures (there is one for each aperture setting) are taken at ISO640 (Canon EOS 5D Mark II at 67mm)!

(Granted, many are too allergic to noise and end up missing a picture instead of just taking the noisy one which is a shame, but that's another story entirely.)
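The ISO comparison above is simple log2 arithmetic: going from the source photos' ISO 640 to a simulated ISO 1600 is about 1.3 stops. A minimal sketch of the exposure trade-off (function names are my own, for illustration):

```python
import math

def stops_between_iso(iso_from: int, iso_to: int) -> float:
    """Exposure difference, in stops, between two ISO settings."""
    return math.log2(iso_to / iso_from)

def equivalent_shutter(shutter_s: float, iso_from: int, iso_to: int) -> float:
    """At a fixed aperture, the shutter time that keeps the same exposure
    after an ISO change: raising ISO shortens the shutter proportionally."""
    return shutter_s * iso_from / iso_to

# ISO 640 -> ISO 1600 buys log2(2.5) ~= 1.3 stops, so a 1/100 s
# exposure can drop to 1/250 s at the same aperture.
```

This is why the jump from ISO 640 to ISO 1600 is a meaningful but not extreme step on any reasonably modern sensor.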

arghwhat commented on Europe's $24T Breakup with Visa and Mastercard Has Begun   europeanbusinessmagazine.... · Posted by u/NewCzech
ajb · 2 days ago
Each individual detail isn't difficult; the moat is dealing with a huge, huge pile of them. But most of the details are driven by laws and regulations: if the entity in charge of those things decides it doesn't want you to have a moat any more, you've got a problem. If there's one thing the EU really does have, it's the capacity to revise regulations.
arghwhat · 2 days ago
Rather than a moat of details, it's a first-mover advantage. Anyone can run a credit card network, but merchants and banks need to support it. Many others exist; the issue is that they don't have widespread adoption. Solutions that work already exist, which means the lesser-supported alternative is not widely used, which in turn reduces the incentive for wider adoption...

Regulation changes "why bother" to "oh crap".

arghwhat commented on The RCE that AMD won't fix   mrbruh.com/amd/... · Posted by u/MrBruh
Avamander · 3 days ago
I have implemented parts of all three. I doubt you have.

> Adding TLS in front of HTTP when talking to an untrusted third-party server, can only ever increase your attack surface.

No, against a MITM it instantly subtracts the surface inside the TLS from the equation. Which is the entire point.

> [...] that's why we have file signatures in the first place.

You still don't understand that even before the cryptographic operations done to verify the signatures, you have all those other layers. Layers that are complex to implement, easy to misinterpret, and repeatedly found flawed to this day. PGP is so terrible that no serious cryptographer even bothers looking at it in this day and age.

I start getting the feeling that you're involved in keeping the package repositories stuck in the past. I can't wait for yet another Apt bug where some MITM causes problems yet again.

arghwhat · 2 days ago
> I start getting the feeling that you're involved in keeping the package repositories stuck in the past.

I start getting the feeling that you have no actual experience in threat modelling.

arghwhat commented on The RCE that AMD won't fix   mrbruh.com/amd/... · Posted by u/MrBruh
Avamander · 3 days ago
No, you want to move goalposts, but we're not speaking of some arbitrary "total attack surface". The article itself is also about a potential MITM. Then you list three cherry-picked cases, none of which actually touch upon the concerns that a plaintext connection introduces or exposes. Please stop, it's silly.

There is fundamentally no reasonable threat model where a plaintext connection (involving all these previously listed protocols) is safer against a MITM than an encrypted and authenticated one.

arghwhat · 2 days ago
You don't call it "cherry-picking" when a person lists fundamental flaws in your argument.

Constantly ignoring all the flaws outlined and just reiterating your initial opinion with no basis whatsoever is at best ignorance, at worst trolling.

HTTP with signed packages is by definition a protocol with authenticated payloads, and encryption exclusively provides privacy. And no, we're not singling out the least likely attack vector for the convenience of your argument - we're looking at the whole stack.

arghwhat commented on Eight more months of agents   crawshaw.io/blog/eight-mo... · Posted by u/arrowsmith
entropyneur · 2 days ago
> I deeply appreciate hand-tool carpentry and mastery of the art, but people need houses and framing teams should obviously have skillsaws.

Where are all the new houses? I admit I am not a bleeding edge seeker when it comes to software consumption, but surely a 10x increase in the industry output would be noticeable to anyone?

arghwhat · 2 days ago
The real outcome is mostly a change in workflow and a reasonable increase in throughput. There might be a 10x or even 100x increase in creation of tiny tools or apps (yay to another 1000 budget assistant/egg timer/etc. apps on the app/play store), but hardly something one would notice.

To be honest, I think the surrounding paragraph lumps together all anti-AI sentiments.

For example, there is a big difference between "all AI output is slop" (which is objectively false) and "AI enables sloppy people to do sloppy work" (which is objectively true), and there's a whole spectrum.

What bugs me personally is not at all my own usage of these tools, but the increase in workload caused by other people using these tools to drown me in nonsensical garbage. In recent months, the extra workload has far exceeded my own productivity gains.

For the non-technical, imagine a hypochondriac using chatgpt to generate hundreds of pages of "health analysis" that they then hand to their doctor and expect a thorough read and opinion of, vs. the doctor using chatgpt for sparring on a particular issue.

arghwhat commented on The RCE that AMD won't fix   mrbruh.com/amd/... · Posted by u/MrBruh
Avamander · 5 days ago
TLS may be complicated for some people. But unlike HTTP, it even has formally proven correct implementations. You can't say the same about HTTP, PGP and Apt.

> Having to harden two protocol implementations, vs. hardening just one of those.

We're speaking of a MITM here. In that case no, you don't have to harden both. (Even if you did have to, ain't nobody taking on OpenSSL before all the rest, it's not worth the effort.)

I find it kind-of weird that you can't understand that if all a MITM can tamper with is the TLS then it's irrefutably a significantly smaller surface than HTTP+PGP+Apt.

arghwhat · 3 days ago
> We're speaking of a MITM here

We are speaking of the total attack surface.

1. When it comes to injecting invalid data to break a parser, a MITM can target TLS without problem. This is identical to the type of attack you claimed was relevant to HTTP-only: feeding invalid data that would be rejected by signature verification.

2. Any server owning a domain name can have a valid TLS certificate, creating "trusted" connections, no MITM necessary. Any server in your existing mirrorlist can go rogue, any website you randomly visit might be evil. They can send you both signed but evil TLS packets, and malicious HTTP payloads.

3. Even if the server is good, it's feeding you externally obtained data that too could be evil.

There is no threat model here where you do not rely 100% on the validity of the HTTP stack and file signature checking. TLS only adds another attack surface, by running more exploitable code on your machine, without taking away any vulnerabilities in what it protects.

arghwhat commented on The RCE that AMD won't fix   mrbruh.com/amd/... · Posted by u/MrBruh
Avamander · 5 days ago
That was a long time ago and it was specific to one implementation. In comparison GnuPG has had so many critical vulnerabilities even recently. That's why Apt switched to Sequoia.

Modern TLS stacks are far from fragile, especially in comparison to PGP implementations. It's a significant reduction in attack surface when it's a MITM we're talking about.

Malicious mirrors remain a problem, but having TLS in the mix doesn't make it more dangerous. Potential issues with PGP, HTTP and Apt's own logic are just so much more likely.

arghwhat · 3 days ago
If you believe TLS is more fragile than PGP and plain HTTP, then I have reason to believe you have never looked at any of those wire protocols/file formats and the logic required.

Adding TLS in front of HTTP when talking to an untrusted third-party server (and yes, any standard HTTPS server is untrusted in this context) can only ever increase your attack surface. The only scenario where it reduces the attack surface is if you are connected with certificate pinning to a trusted server implementation serving only trusted payloads, and neither is the case for a package repo - that's why we have file signatures in the first place.

arghwhat commented on The RCE that AMD won't fix   mrbruh.com/amd/... · Posted by u/MrBruh
Avamander · 6 days ago
TLS stacks are generally significantly harder targets than HTTP ones. It's absolutely possible to use one incorrectly, but then we should also count all the ways you can misuse an HTTP stack; there are a lot more of those.
arghwhat · 5 days ago
This statement makes no sense. TLS is a complicated protocol whose implementations have had massive and quite public security issues, while HTTPS means you have both protocols and need to deal with a TLS server feeding you malicious HTTP responses.

Having to harden two protocol implementations, vs. hardening just one of those.

(Having set up letsencrypt to get a valid certificate does not mean that the server is not malicious.)

arghwhat commented on The RCE that AMD won't fix   mrbruh.com/amd/... · Posted by u/MrBruh
dns_snek · 6 days ago
> HTTP/1.1 alone is a trivial protocol

Eh? CWE-444 would beg to differ: https://cwe.mitre.org/data/definitions/444.html

https://http1mustdie.com/

> the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.

An attacker doesn't get to attack client's HTTP stack without first piercing protection offered by TLS.

arghwhat · 5 days ago
You seem to have forgotten all the critical TLS bugs we had. Heartbleed ring a bell?

> An attacker doesn't get to attack client's HTTP stack without first piercing protection offered by TLS.

You misunderstand: this means more attack surface.

The attacker can mess with the far more complex and fragile TLS stack, and any attacker controlling a server or server payload can also attack the HTTP stack.

Have you recently inspected who owns and operates every single mirror in the mirror list? None of these are trusted by you or by the distro, they're just random third parties - the trust is solely in the package and index signatures of the content they're mirroring.

I'm not suggesting not using HTTPS, but it is just objectively wrong to consider it to have reduced your attack surface. At the same time, most of its security guarantees are insufficient and useless for this particular task, so in this case the trade-off is solely privacy for complexity.
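The CWE-444 request-smuggling class linked above is a concrete example of the HTTP-stack fragility being argued over: two parsers honouring different framing headers disagree on where one request ends. A toy illustration with deliberately naive parsers (not production code, and the request is hypothetical):

```python
# A request carrying both framing headers - the ambiguity behind CWE-444.
RAW = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 28\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"GET /admin HTTP/1.1\r\n\r\n"
)

def leftover_by_content_length(data: bytes) -> bytes:
    """Front-end view: trust Content-Length for framing."""
    head, _, rest = data.partition(b"\r\n\r\n")
    length = 0
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            length = int(line.split(b":", 1)[1])
    return rest[length:]  # bytes belonging to the "next" request

def leftover_by_chunked(data: bytes) -> bytes:
    """Back-end view: trust Transfer-Encoding: chunked for framing."""
    _, _, rest = data.partition(b"\r\n\r\n")
    while True:
        size_line, _, rest = rest.partition(b"\r\n")
        size = int(size_line, 16)
        if size == 0:
            _, _, rest = rest.partition(b"\r\n")  # consume trailer CRLF
            return rest
        rest = rest[size + 2:]  # skip chunk data plus its CRLF

# The Content-Length parser consumes all 28 body bytes; the chunked parser
# stops at the zero-size chunk and sees "GET /admin ..." as a second request.
```

The point either way: this ambiguity exists in HTTP framing itself, whether or not the connection is wrapped in TLS.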

arghwhat commented on The RCE that AMD won't fix   mrbruh.com/amd/... · Posted by u/MrBruh
inetknght · 6 days ago
> For signed payloads there is no difference, you're trusting <client>'s authentication code to read a blob, a signature and validate it according to a public key.

Assuming this all came through unencrypted HTTP:

- you're also trusting that the client's HTTP stack is parsing HTTP content correctly

- for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses

- you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)

- you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)

It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.

arghwhat · 6 days ago
> you're also trusting that the client's HTTP stack is parsing HTTP content correctly

This is an improvement: HTTP/1.1 alone is a trivial protocol, whereas the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.

For technical reasons, unencrypted HTTP is also always the simpler (and for bulk transfers more performant) HTTP/1.1 in practice as standard HTTP/2 dictates TLS with the special non-TLS variant ("h2c") not being as commonly supported.

> for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses

You don't, just like you don't trust a TLS server to generate valid TLS (and tunneled HTTP) messages.

> you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)

You don't. Authentication 101 (which also applies to how TLS works): authenticity is always validated before inspecting or interacting with content. Same rules that TLS needs to follow when it authenticates its own messages.
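The verify-before-parse ordering described here can be sketched as follows. HMAC stands in for the repository's asymmetric signature (real repos use something like Ed25519, but the ordering is the same), and all names are illustrative:

```python
import hashlib
import hmac
import json

# Stand-in for the repo signing key; in reality this is an asymmetric
# keypair, but the verify-before-parse discipline is identical.
KEY = b"example-signing-key"

def sign(payload: bytes) -> bytes:
    """Produce a signature over the raw payload bytes."""
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def load_verified(payload: bytes, signature: bytes) -> dict:
    # Step 1: authenticate the raw bytes. No parser has touched them yet.
    if not hmac.compare_digest(sign(payload), signature):
        raise ValueError("bad signature; payload is never parsed")
    # Step 2: only authenticated data ever reaches the (complex) parser.
    return json.loads(payload)

index = b'{"package": "example", "version": "1.0"}'
record = load_verified(index, sign(index))   # authentic, so it gets parsed
```

A tampered payload fails at step 1, so the parser's attack surface is never exposed to attacker-controlled bytes.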

Furthermore, TLS does nothing to protect you against a server delivering malicious files (e.g., a rogue maintainer or mirror intentionally giving you borked files).

> you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)

You don't, as the signature must be authentic from a trusted author (the specific maintainer of the specific package for example). The server or attacker is unable to craft valid signatures, so something "tacked-on" just gets rejected as invalid - just like if you mess with a TLS message.

> It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.

The basis of your trust is invalid and misplaced: Not only is TLS not providing additional security here, TLS is the more complex, fragile and historically vulnerable beast.

The only non-privacy risk of using non-TLS mirrors is that a MITM could keep serving you an old snapshot of the repository (which is valid and signed by the maintainers), withholding an update without you knowing. But such a MITM can also just fail your connection to a TLS mirror and then you also can't update, so no: it's just privacy.
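The withholding attack described above is typically mitigated by putting an expiry inside the signed metadata (Apt's `Valid-Until` field in the Release file works this way), so a stale-but-validly-signed index is eventually rejected. A minimal sketch of that check (function name is my own):

```python
from datetime import datetime, timedelta, timezone

def index_is_fresh(valid_until, now=None):
    """Reject a signed repository index whose embedded expiry has passed.

    `valid_until` must come from inside the signed metadata: a MITM
    replaying an old index cannot forge a newer expiry without breaking
    the signature.
    """
    now = now or datetime.now(timezone.utc)
    return now <= valid_until

# A replayed week-old index with a one-day validity window is rejected
# even though its signature still verifies.
```

Because the expiry rides inside the signed blob, this works over plain HTTP; TLS adds nothing to it beyond hiding which index you fetched.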
