Readit News
minitech commented on When internal hostnames are leaked to the clown   rachelbythebay.com/w/2026... · Posted by u/zdw
stingraycharles · 6 days ago
I don’t understand. How could a GCP server access the private NAS?

I agree the web UI should never be monitored using Sentry. I can see why they would want it, but at the very least it should be opt-in.

minitech · 6 days ago
It couldn’t, but it tried.


minitech commented on Lennart Poettering, Christian Brauner founded a new company   amutable.com/about... · Posted by u/hornedhob
myaccountonhn · 14 days ago
It's interesting there's no remote attestation the other way around, making sure the server is not doing something to your data that you didn't approve of.
minitech · 14 days ago
There is. Signal uses it, for example. https://signal.org/blog/building-faster-oram/

For another example, IntegriCloud: https://secure.integricloud.com/
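To make that concrete, here's a rough sketch of what the client-side check could look like: refuse to send anything unless the server's attestation report proves it is running a build you already trust. The names and report format here are purely illustrative, not Signal's or IntegriCloud's actual API.

```python
import hmac

# Illustrative only: the client refuses to talk to a server whose attestation
# report doesn't prove it is running a build the client already trusts.
TRUSTED_MEASUREMENTS = {
    bytes.fromhex("aa" * 32),  # measurement of the audited/open-source build (placeholder)
}

def attestation_ok(report: dict) -> bool:
    # A real report is signed by the hardware vendor, and that signature chain
    # must be verified first; that step is omitted here.
    measurement = report["enclave_measurement"]
    return any(hmac.compare_digest(measurement, m) for m in TRUSTED_MEASUREMENTS)

def upload(data: bytes, report: dict) -> None:
    if not attestation_ok(report):
        raise RuntimeError("server isn't running code we approved of")
    # ...proceed with the request...
```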

minitech commented on Cloudflare Can't Save You from a DoS (I Checked)   nullrabbit.ai/research/cl... · Posted by u/simonmorley
minitech · 15 days ago
AI slop? The most egregiously nonsensical part:

> **3. The Layer 7 Limitation** Cloudflare operates primarily at the application layer. Many failures happen deeper in the stack. Aggressive SYN floods, malformed packets, and protocol abuse strike the kernel before an HTTP request is even formed. If your defense relies on parsing HTTP, you have already lost the battle against L3/L4 attacks.

No idea how valid the video is. It could be accurate, it could be entirely simulated, it could be making some kind of simple mistake. (At least there’s a tiny bit more detail in the video description on Vimeo.) Anyway, good time to learn about the blanket “I’m under attack” mode and/or targeted rules.
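For reference, a minimal sketch of flipping that mode on via the Cloudflare zone-settings API. The endpoint and the `security_level`/`under_attack` values are as I remember the API, so double-check against the current docs; the zone ID and token below are placeholders.

```python
import requests  # third-party; pip install requests

ZONE_ID = "your-zone-id"      # placeholder
API_TOKEN = "your-api-token"  # placeholder

# Switch the zone to "I'm Under Attack" mode, which interposes a challenge
# page in front of every visitor before requests reach the origin.
resp = requests.patch(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/settings/security_level",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"value": "under_attack"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["result"]["value"])  # expect "under_attack"
```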

> **2. The Origin IP Bypass** Cloudflare only protects traffic that proxies through them. If an attacker discovers your origin IP--or if you are running P2P nodes, validators, or RPC services that must expose a public IP--the edge is bypassed entirely. At that point, there is no WAF and no rate limiting. Your network interface is naked.

Revolutionary stuff.

minitech commented on Google confirms 'high-friction' sideloading flow is coming to Android   androidauthority.com/goog... · Posted by u/_____k
hypercube33 · 17 days ago
The old saying goes a fool and their money is soon departed.

Why should the rest of us be punished?

minitech · 17 days ago
You pay a cost either way: live in a world with better funded and incentivized scammers and in a community less wealthy by a corresponding amount, or have a slightly less convenient sideloading experience.

I guess if you take the old saying extremely literally, you could conclude that every fool is guaranteed to be parted with 100% of their lifetime available money regardless of what anyone else tries to do to stop that, but that’s not true – which is why old sayings (with a respectable 75% of the words right) taken literally aren’t a good basis for decision-making.

minitech commented on Libbbf: Bound Book Format, A high-performance container for comics and manga   github.com/ef1500/libbbf... · Posted by u/zdw
zigzag312 · 21 days ago
> Uniformity isn’t directly important for error detection.

Is there any proof of this? I'm interested in reading more about it.

> detect all burst errors up to 32 bits in size

What if errors are not consecutive bits?

minitech · 21 days ago
There’s a whole field’s worth of really cool stuff about error correction, and I wish I knew enough of even a fraction of it to give reading recommendations, but my comment wasn’t that deep. In hashes, you obviously care about distribution, because that’s almost the entire point of non-cryptographic hashes; in error detection, you only care that x ≠ y implies f(x) ≠ f(y) with high probability. The two are only directly related in the obvious way of making use of the output space (though they’re probably indirectly related in some interesting, subtler ways).

E.g. f(x) = concat(xxhash32(x), 0xf00) is just as good at error detection as xxhash32 but is a terrible hash, and, as mentioned, CRC-32 is infinitely better at detecting certain types of errors than any universal hash family.
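A quick sketch of that first example, assuming the third-party `xxhash` package. The constant is appended as the low bits, so the function distinguishes inputs exactly as well as plain xxhash32, but any table that buckets on the low bits collapses to a single bucket.

```python
import xxhash  # third-party; pip install xxhash

def silly_hash(data: bytes) -> int:
    # 44-bit output: xxhash32 of the input with a constant 0xF00 appended.
    # silly_hash(x) == silly_hash(y) exactly when xxhash32(x) == xxhash32(y),
    # so error detection is unchanged...
    return (xxhash.xxh32(data).intdigest() << 12) | 0xF00

keys = [f"key-{i}".encode() for i in range(1000)]
# ...but as a hash-table hash it's terrible: bucketing on the low 12 bits
# puts every key in the same bucket.
print(len({silly_hash(k) % 4096 for k in keys}))  # 1
```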

minitech commented on Libbbf: Bound Book Format, A high-performance container for comics and manga   github.com/ef1500/libbbf... · Posted by u/zdw
zigzag312 · 21 days ago
From the SMHasher test results, the quality of xxhash seems higher. It has less bias / higher uniformity than CRC.

What bothers me with probability calculations is that they always assume perfect uniformity. I've never seen any estimate of how bias affects collision probability, or of how to modify the probability formula to account for the non-perfect uniformity of a hash function.

minitech · 21 days ago
Uniformity isn’t directly important for error detection. CRC-32 has the nice property that it’s guaranteed to detect all burst errors up to 32 bits in size, while a b-bit hash only does that with probability about 1 − 2^−b, of course. (But it’s valid to care about detecting larger errors with higher probability, yes.)
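A small empirical illustration of the burst-error guarantee, using the stdlib `zlib.crc32` (the file size and iteration count are arbitrary choices for the demo):

```python
import random
import zlib

def flip_burst(data: bytes, start_bit: int, length: int) -> bytes:
    out = bytearray(data)
    for i in range(start_bit, start_bit + length):
        out[i // 8] ^= 1 << (i % 8)
    return bytes(out)

# Flip a random run of 1-32 consecutive bits and confirm CRC-32 always notices.
random.seed(0)
for _ in range(10_000):
    data = random.randbytes(256)
    length = random.randint(1, 32)
    start = random.randint(0, len(data) * 8 - length)
    assert zlib.crc32(flip_burst(data, start, length)) != zlib.crc32(data)
print("every burst of ≤32 bits was detected")
```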
minitech commented on Libbbf: Bound Book Format, A high-performance container for comics and manga   github.com/ef1500/libbbf... · Posted by u/zdw
zigzag312 · 21 days ago
> which can be done fast enough to appear instant on the CPU

Big scanned PDFs can be problematic and could benefit from more efficient processing (if it had HW support for such a technique).

> Your link shows CRC32 at 7963.20 MiB/s (~7.77 GiB/s) which indicates it's either very old or isn't measuring pure CRC32 throughput

It may not be the fastest implementation of CRC32, but it's also run on an old Ryzen 5 3350G at 3.6 GHz. Below the table are results from different HW. On an Intel i7-6820HQ, CRC32 achieves 27.6 GB/s.

> measures 85 GB/s (GB, GiB, eh close enough) on the Apple M1. That's fast enough that I'm comfortable calling it limited by memory bandwidth on real-world systems.

That looks incredibly suspicious since Apple M1 has maximum memory bandwidth of 68.25 GB/s [1].

> I have personally seen bitrot and network transmission errors that were not caught by xxhash-type hash functions, but were caught by higher-level checksums. The performance properties of hash functions used for hash table keys make those same functions less appropriate for archival.

Your argument is meaningless without more details. xxhash supports 128 bits, which I doubt would have failed to catch the error in your case.

SHA256 is an order of magnitude or more slower than non-cryptographic hashes. In my experience, the archival process usually has a big enough effect on performance to care about it.

I'm beginning to suspect your primary reason for disliking xxhash is that it's not a de facto standard like CRC or SHA. I agree that this is a big one, but you keep implying there's more to why xxhash is bad. Maybe my knowledge is lacking; care to explain? Why wouldn't 128-bit xxhash be more than enough for file checksums? AFAIK the only thing it doesn't do is protect you against tampering.

> I don't know what kopia is, but according to your link it looks like their wire protocol involves each client downloading a complete index of the repository content, including a CAS identifier for every file. The semantics would be something like Git? Their list of supported algorithms looks reasonable (blake, sha2, sha3) so I wouldn't have the same concerns as I would if they were using xxhash or cityhash.

Kopia uses hashes for block-level deduplication. What would be the issue if they used 128-bit xxhash instead of a 128-bit cryptographic hash like they do now (assuming we don't need protection from tampering)?

[1] https://en.wikipedia.org/wiki/Apple_M1

minitech · 21 days ago
> What would be the issue if they used 128-bit xxhash instead of a 128-bit cryptographic hash like they do now (assuming we don't need protection from tampering)?

Malicious block hash collisions where the colliding block was introduced in some way other than tampering (e.g. storing a file created by someone else).
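To illustrate why that matters, here's a toy content-addressed block store (hashlib's SHA-256 stands in for whatever digest the real system uses, and the class/method names are made up for the sketch). Whichever block arrives first "owns" its digest, so an attacker who can precompute a collision and get their block stored first silently substitutes their bytes for yours.

```python
import hashlib

class ToyDedupStore:
    """Keeps at most one block per digest, like block-level dedup does."""

    def __init__(self) -> None:
        self.blocks: dict[bytes, bytes] = {}

    def put(self, block: bytes) -> bytes:
        digest = hashlib.sha256(block).digest()
        # Dedup step: if the digest is already present, the new block is
        # assumed to be identical and is silently dropped.
        self.blocks.setdefault(digest, block)
        return digest

    def get(self, digest: bytes) -> bytes:
        return self.blocks[digest]

# With a collision-resistant hash this is fine. With a non-cryptographic
# 128-bit hash, anyone who can find block_evil != block_good with the same
# digest and store block_evil first makes get() return the wrong bytes
# whenever block_good is later uploaded and deduplicated away.
```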

minitech commented on There's a ridiculous amount of tech in a disposable vape   blog.jgc.org/2026/01/ther... · Posted by u/abnercoimbre
piyushpr134 · a month ago
Paper straws do not make any sense any way you look at it. Are we saying that we are okay with cutting down trees to make straws when we could make them out of petroleum?

Moreover, paper straws are not even recyclable due to water content, which makes them soggy. Plastic ones are almost 100% recyclable.

Most importantly, unlike plastic straws, they are laced with glue and other chemicals which get ingested.

minitech · a month ago
> Are we saying that we are okay to cut trees to make straws when we could make them out of petroleum ?

It’s more okay to make things out of paper than plastic, yes. Plastic waste and microplastics are a huge problem. Trees are a renewable resource.

> Moreover, paper straws are not even recyclable due to water content which makes them soggy. Plastic ones are almost 100% recyclable

Plastic straws are almost never (literally never?) recycled. Paper straws are supposed to be fully biodegradable.

> Most importantly, unlike plastic straws, they are laced with glue and other chemicals which get ingested.

But yes, this and the usability issue make the other points moot (n.b. leaching harmful chemicals is a concern that also applies to plastic straws and paper cups). The vast majority of existing straws should be replaced with no straw, and most beyond that with reusable straws.

minitech commented on “Erdos problem #728 was solved more or less autonomously by AI”   mathstodon.xyz/@tao/11585... · Posted by u/cod1r
bytesandbits · a month ago
minitech · a month ago
That doesn’t look like a counterexample to “we formalize the statements by hand and inspect the proofs carefully to ensure they capture the full spirit of the problem”.

u/minitech

Karma: 2441 · Cake day: February 17, 2015
About
24Ω snake and opinionated client-side JavaScript MVVM framework.