robots.txt is not a blocking mechanism; it's a hint indicating which parts of a site are (or are not) worth indexing.
People started using robots.txt to lie, declaring that no part of their site is interesting, and so of course it gets ignored.
Edit: And, btw, that statement was true before the default was changed. So, your comment is doubly false.
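For context, a robots.txt is nothing more than advisory hints like these (paths here are made up for illustration):

```
User-agent: *
Disallow: /admin/
Disallow: /search
Allow: /

User-agent: SomeSpecificBot
Disallow: /
```

Nothing in the protocol enforces any of this; a crawler that wants the content can simply fetch it anyway, which is exactly why blanket `Disallow: /` declarations end up being treated as noise.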
Atomicity: Yes, for individual operations. Each key-value operation is atomic (it either completes fully or not at all).
Consistency: Partial. We ensure data validity through our conflict resolution strategies, but we don't support multi-key constraints or referential integrity.
Isolation: Limited. Operations on individual keys are isolated, but we don't provide transaction isolation levels across multiple keys.
Durability: Yes. Our persistence model allows for tunable durability guarantees with corresponding performance trade-offs.
So while we provide strong guarantees for individual operations, HPKV is not a full ACID-compliant database system. We've optimized for high-performance key-value operations with practical durability assurances rather than complete ACID semantics.
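The "tunable durability" trade-off described above usually boils down to whether an acknowledged write waits for the data to reach stable storage. A minimal sketch of that idea (the file format and `put` API here are hypothetical, not HPKV's actual interface):

```python
import os

def put(f, key: bytes, value: bytes, durable: bool) -> None:
    """Append a length-prefixed key-value record to a log file.

    With durable=False the write only reaches the OS page cache, so an
    acknowledged write can still be lost on power failure -- that is the
    performance/durability trade-off being tuned.
    """
    record = (len(key).to_bytes(4, "big") + key
              + len(value).to_bytes(4, "big") + value)
    f.write(record)
    if durable:
        f.flush()             # push Python's userspace buffer to the kernel
        os.fsync(f.fileno())  # force the kernel to write to the device

with open("kv.log", "ab") as f:
    put(f, b"user:1", b"alice", durable=True)
```

The fast path skips the `fsync`, which is where most of the latency of a durable write actually lives.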
That's not what consistency means in ACID. ACID consistency means every transaction takes the database from one valid state to another, preserving declared invariants (constraints, referential integrity); conflict resolution between concurrent writers is the distributed-systems sense of the word, which is a different thing.
> Durability: Yes. Our persistence model allows for tunable durability guarantees with corresponding performance trade-offs.
> ~600ns p50 for writes with disk persistence
I'm pretty sure there's no durability there: a durable write has to wait for an fsync (or equivalent), which by itself costs far more than 600ns. That statement is pretty disingenuous in itself, but it'd be nice to see a number for actually durable writes (which, granted, is not something you advertise the product for).
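This is easy to check empirically: time a small write followed by an fsync. Even on fast NVMe hardware this typically lands in the tens of microseconds, orders of magnitude above 600ns, so a 600ns p50 cannot include waiting for stable storage (a rough measurement sketch, numbers will vary by machine and filesystem):

```python
import os
import tempfile
import time

# Measure the average cost of a durable (fsynced) 16-byte append.
fd, path = tempfile.mkstemp()
try:
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        os.write(fd, b"x" * 16)
        os.fsync(fd)  # don't return until the device has the data
    per_write = (time.perf_counter() - start) / n
    print(f"durable write latency: {per_write * 1e6:.1f} us per write")
finally:
    os.close(fd)
    os.remove(path)
```

Whatever number this prints on a given box, it won't be 600ns; the syscall round trip alone costs more than that.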
My main concern is that all these speed benefits are going to be eclipsed by the 0.5ms of network latency.
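The arithmetic behind that concern: against a ~0.5ms round trip, a ~600ns server-side write is a rounding error, roughly three orders of magnitude smaller:

```python
network_rtt = 0.5e-3  # 0.5 ms round trip, in seconds
write_p50 = 600e-9    # claimed ~600 ns p50 write latency, in seconds

# End-to-end time for a remote operation is dominated by the network.
total = network_rtt + write_p50
print(f"write is {write_p50 / total:.4%} of the total per-op time")
print(f"network RTT is ~{network_rtt / write_p50:.0f}x the write itself")
```

So shaving the server-side write from microseconds to nanoseconds barely moves end-to-end latency for networked clients; it mainly matters for throughput per core or for co-located callers.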