For another example, IntegriCloud: https://secure.integricloud.com/
> **3. The Layer 7 Limitation** Cloudflare operates primarily at the application layer. Many failures happen deeper in the stack. Aggressive SYN floods, malformed packets, and protocol abuse strike the kernel before an HTTP request is even formed. If your defense relies on parsing HTTP, you have already lost the battle against L3/L4 attacks.
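As a concrete illustration of what "below HTTP" means on a Linux origin (my own sketch, not something from the quoted post): SYN-flood handling lives in kernel sysctls and kernel counters, not in anything that parses requests. The paths are the standard /proc locations, writing them requires root, and the specific values are just examples.

```python
# Minimal sketch: kernel-level SYN-flood knobs on Linux. Values are examples only.
SYSCTLS = {
    "/proc/sys/net/ipv4/tcp_syncookies": "1",          # fall back to SYN cookies under pressure
    "/proc/sys/net/ipv4/tcp_max_syn_backlog": "4096",  # larger half-open connection queue
}

for path, value in SYSCTLS.items():
    with open(path, "w") as f:   # requires root
        f.write(value)

# The kernel reports how often it resorted to cookies in /proc/net/netstat
# (TcpExt: SyncookiesSent); a rising number means the flood never reached HTTP.
with open("/proc/net/netstat") as f:
    lines = f.read().splitlines()
for header, values in zip(lines[::2], lines[1::2]):
    if header.startswith("TcpExt:"):
        stats = dict(zip(header.split()[1:], values.split()[1:]))
        print("SyncookiesSent:", stats.get("SyncookiesSent"))
```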
No idea how valid the video is. It could be accurate, it could be entirely simulated, it could be making some kind of simple mistake. (At least there’s a tiny bit more detail in the video description on Vimeo.) Anyway, good time to learn about the blanket “I’m under attack” mode and/or targeted rules.
> **2. The Origin IP Bypass** Cloudflare only protects traffic that proxies through them. If an attacker discovers your origin IP--or if you are running P2P nodes, validators, or RPC services that must expose a public IP--the edge is bypassed entirely. At that point, there is no WAF and no rate limiting. Your network interface is naked.
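A quick sketch of how that bypass is usually demonstrated: talk to the raw origin IP directly but present the site's hostname, so Cloudflare never sees the request. The IP and hostname below are placeholders (TEST-NET-3 and example.com), and the third-party requests package is assumed.

```python
import requests

ORIGIN_IP = "203.0.113.10"   # placeholder address in the TEST-NET-3 range
HOSTNAME = "example.com"     # placeholder hostname

try:
    resp = requests.get(
        f"https://{ORIGIN_IP}/",
        headers={"Host": HOSTNAME},
        timeout=5,
        verify=False,        # the cert is issued for the hostname, not the bare IP
    )
    # If this returns your site's content, the edge is not protecting you:
    # the WAF and rate limiting at Cloudflare never see this traffic.
    print(resp.status_code, len(resp.content))
except requests.RequestException as exc:
    print("origin did not answer directly:", exc)
```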
Revolutionary stuff.
Why should the rest of us be punished?
I guess if you take the old saying extremely literally, you could conclude that every fool is guaranteed to be parted from 100% of their lifetime available money regardless of what anyone else tries to do to stop that. But that's not true – which is why old sayings (with a respectable 75% of the words right), taken literally, aren't a good basis for decision-making.
Is there any proof of this? I'm interested in reading more about it.
> detect all burst errors up to 32 bits in size
What if errors are not consecutive bits?
E.g. f(x) = concat(xxhash32(x), 0xf00) is just as good at error detection as xxhash32 but is a terrible hash, and, as mentioned, CRC-32 is infinitely better at detecting certain types of errors than any universal hash family.
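A small runnable sketch of both claims, assuming CPython's zlib and the third-party xxhash package; the test message and burst positions are arbitrary, not anything from the thread.

```python
import zlib
import xxhash

msg = bytearray(b"some representative payload for burst-error testing" * 10)

def flip_burst(data: bytearray, bit_offset: int, burst_len: int) -> bytearray:
    """Return a copy of `data` with `burst_len` consecutive bits inverted."""
    out = bytearray(data)
    for b in range(bit_offset, bit_offset + burst_len):
        out[b // 8] ^= 1 << (b % 8)
    return out

# (a) CRC-32 is guaranteed to detect any single burst of <= 32 bits.
for offset in range(0, len(msg) * 8 - 32, 97):        # sample of burst positions
    for burst in (1, 8, 17, 32):
        assert zlib.crc32(flip_burst(msg, offset, burst)) != zlib.crc32(msg)

# (b) Appending a constant keeps the error-detection power (the xxhash32 part
# still changes), but makes a terrible hash-table hash: fixed low bits.
def f(x: bytes) -> int:
    return (xxhash.xxh32(x).intdigest() << 12) | 0xF00

corrupted = flip_burst(msg, 123, 16)
print(f(bytes(msg)) != f(bytes(corrupted)))   # True with overwhelming probability
print(hex(f(b"anything") & 0xFFF))            # always 0xf00: every bucket keyed on low bits collides
```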
What bothers me with probability calculations is that they always assume perfect uniformity. I've never seen any estimate of how bias affects collision probability, or how to modify the probability formula to account for the non-perfect uniformity of a hash function.
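A rough sketch of one way to fold bias into the usual birthday estimate (my own illustration, not from the thread): for independent inputs, the chance that two outputs collide is the sum of squared output probabilities, which reduces to 1/M for a perfectly uniform hash and only grows with bias. The toy 8-bit distribution below is made up.

```python
import math

M = 256   # toy output space (8-bit hash)
n = 100   # number of hashed items

uniform = [1 / M] * M
# A biased hash: half the buckets are twice as likely as the other half.
biased = [1.5 / M] * (M // 2) + [0.5 / M] * (M // 2)

def collision_prob(dist, n):
    pairwise = sum(p * p for p in dist)            # P(two independent items collide)
    return 1 - math.exp(-n * (n - 1) / 2 * pairwise)

print(collision_prob(uniform, n))   # classic birthday estimate with pairwise = 1/M
print(collision_prob(biased, n))    # strictly larger: bias only makes collisions more likely
```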
Big scanned PDFs can be problematic and would benefit from more efficient processing (if there were HW support for such a technique).
> Your link shows CRC32 at 7963.20 MiB/s (~7.77 GiB/s) which indicates it's either very old or isn't measuring pure CRC32 throughput
It may not be the fastest implementation of CRC32, but it was also measured on an old Ryzen 5 3350G at 3.6 GHz. Below the table are results on different HW. On an Intel i7-6820HQ, CRC32 achieves 27.6 GB/s.
> measures 85 GB/s (GB, GiB, eh close enough) on the Apple M1. That's fast enough that I'm comfortable calling it limited by memory bandwidth on real-world systems.
That looks incredibly suspicious, since the Apple M1 has a maximum memory bandwidth of 68.25 GB/s [1].
> I have personally seen bitrot and network transmission errors that were not caught by xxhash-type hash functions, but were caught by higher-level checksums. The performance properties of hash functions used for hash table keys make those same functions less appropriate for archival.
Your argument is meaningless without more details. xxhash supports 128-bit output, which I doubt would have failed to catch the error in your case.
SHA256 is an order of magnitude or more slower than non-cryptographic hashes, and in my experience hashing usually has a big enough effect on archival performance to care about it.
I'm beginning to suspect your primary reason for disliking xxhash is that it's not a de facto standard like CRC or SHA. I agree that this is a big one, but you keep implying there's more to why xxhash is bad. Maybe my knowledge is lacking; care to explain? Why wouldn't 128-bit xxhash be more than enough for file checksums? AFAIK the only thing it doesn't do is protect you against tampering.
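For concreteness, a minimal sketch of the kind of checksum being discussed: a streaming 128-bit xxHash (XXH3-128) over a file. It assumes the third-party xxhash package; the file path in the usage comment is a placeholder.

```python
import xxhash

def file_xxh128(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through XXH3-128 and return the hex digest."""
    h = xxhash.xxh3_128()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Usage (placeholder path):
# print(file_xxh128("some-archive-member.bin"))
#
# With 128 random-looking bits, the accidental-collision odds for n blocks are
# roughly n^2 / 2^129, which is negligible for any realistic archive. Unlike
# SHA-256 or BLAKE, though, nothing stops someone from constructing collisions
# on purpose, which is the tampering caveat above.
```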
> I don't know what kopia is, but according to your link it looks like their wire protocol involves each client downloading a complete index of the repository content, including a CAS identifier for every file. The semantics would be something like Git? Their list of supported algorithms looks reasonable (blake, sha2, sha3) so I wouldn't have the same concerns as I would if they were using xxhash or cityhash.
Kopia uses hashes for block-level deduplication. What would the issue be if they used 128-bit xxhash instead of a 128-bit cryptographic hash like they do now (assuming we don't need protection from tampering)?
Malicious block hash collisions, where the colliding block was introduced in some way other than tampering (e.g. by storing a file created by someone else).
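To make the failure mode concrete, here is a minimal sketch of hash-keyed block deduplication (my own illustration, not Kopia's actual code). If the store ever ingests two different blocks with the same digest, whether crafted deliberately or hit by astronomical luck, the second block is silently deduplicated away and later restores as the first block's bytes, with no tampering of the repository required.

```python
import hashlib

store = {}   # digest -> block bytes

def put_block(block: bytes) -> bytes:
    digest = hashlib.blake2b(block, digest_size=16).digest()
    # Dedup: if the digest is already present, the new block is assumed
    # identical and is NOT stored again. A colliding-but-different block
    # would be lost here without any error being raised.
    store.setdefault(digest, block)
    return digest

def get_block(digest: bytes) -> bytes:
    return store[digest]

ref_a = put_block(b"block A contents")
ref_b = put_block(b"block B contents")
assert get_block(ref_a) == b"block A contents"
assert get_block(ref_b) == b"block B contents"   # only holds while no collision exists
```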
Moreover, paper straws are not even recyclable due to water content which makes them soggy. Plastic ones are almost 100% recyclable
Most importantly, unlike plastic straws, they are laced with glue and other chemicals which get ingested.
It’s more okay to make things out of paper than plastic, yes. Plastic waste and microplastics are a huge problem. Trees are a renewable resource.
> Moreover, paper straws are not even recyclable due to water content which makes them soggy. Plastic ones are almost 100% recyclable
Plastic straws are almost never (literally never?) recycled. Paper straws are supposed to be fully biodegradable.
> Most importantly, unlike plastic straws, they are laced with glue and other chemicals which get ingested.
But yes, this and the usability issue make the other points moot (n.b. leaching harmful chemicals is a concern that also applies to plastic straws and paper cups). The vast majority of existing straws should be replaced with no straw, and most beyond that with reusable straws.
I agree the web UI should never be monitored using Sentry. I can see why they would want it, but at the very least it should be opt-in.