Could you share a bit more about your specific use case? That will help me explain how EloqKV can best support it.
Redis Cluster is often thought of as a distributed database, but in reality it’s not truly distributed. It relies on a smart client to route queries to the correct shard—similar to how mongos works in MongoDB. This design means Redis Cluster cannot perform distributed transactions, and developers often need to use hashtags to manually place related data on the same shard.
EloqKV takes a different approach. It’s a natively distributed database with direct interconnects between nodes. You can connect to any node with a standard Redis client, and still read or write data that physically resides on other nodes. This architecture enables true distributed transactions across shards, fully supporting MULTI/EXEC and Lua scripts without special client logic or manual sharding workarounds.
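To make that concrete, here is a minimal sketch using the redis-py client. The host name, keys, and values are hypothetical, and the EloqKV behavior shown is as claimed above rather than verified against its docs:

```python
import redis

# On Redis Cluster, a MULTI/EXEC over keys in different hash slots is
# rejected with a CROSSSLOT error; the usual workaround is a {hashtag}
# (e.g. "{user:42}:profile" and "{user:42}:settings") that pins related
# keys to one shard. Against EloqKV, per the claims above, any node
# accepts the transaction even if the keys live on other nodes.
r = redis.Redis(host="any-eloqkv-node", port=6379, decode_responses=True)

pipe = r.pipeline(transaction=True)        # plain MULTI/EXEC
pipe.set("user:42:profile", '{"name": "Ada"}')
pipe.set("order:9001:status", "paid")      # may reside on another node
pipe.execute()
```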
Writing your own Redis-like interface is trivial, so tidis et al don't matter to me. Even with Redis you should write an interface so you can swap it out.
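For what it's worth, that abstraction can be very thin. A minimal sketch in Python, with all names hypothetical:

```python
from abc import ABC, abstractmethod

class Cache(ABC):
    """Thin cache interface so the backing store (Redis, EloqKV,
    an in-process dict, ...) can be swapped without touching call sites."""

    @abstractmethod
    def get(self, key: str) -> bytes | None: ...

    @abstractmethod
    def set(self, key: str, value: bytes, ttl_s: int | None = None) -> None: ...

class RedisCache(Cache):
    def __init__(self, client):
        self._client = client  # anything speaking the Redis protocol

    def get(self, key):
        return self._client.get(key)

    def set(self, key, value, ttl_s=None):
        self._client.set(key, value, ex=ttl_s)
```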
EloqKV, by contrast, is fully transactional. It supports distributed Lua scripts, MULTI/EXEC, and even SQL-style BEGIN/COMMIT/ROLLBACK syntax. This means you get the transactional guarantees of a database with Redis-level read performance. Writes are slightly slower because EloqKV ensures durability, but in return you gain full ACID safety. Most importantly, you no longer need to worry about cache-coherence issues between a Redis cache and a separate SQL database: EloqKV unifies them into a single, reliable system.
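As a rough illustration of the SQL-style syntax mentioned above (the command spellings are taken from this comment, not from verified EloqKV documentation; the host name is hypothetical):

```python
import redis

r = redis.Redis(host="any-eloqkv-node", port=6379)

# SQL-style transaction control over the Redis protocol, as described.
r.execute_command("BEGIN")
try:
    r.set("account:1:balance", 900)
    r.set("account:2:balance", 1100)
    r.execute_command("COMMIT")
except redis.RedisError:
    r.execute_command("ROLLBACK")
    raise
```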
I believe object storage is shaping the future architecture of cloud databases. The first big shift happened in the data warehouse space, where we saw the move from Teradata and Greenplum to Snowflake accelerate around 2016. Snowflake’s adoption of object storage as its primary storage layer not only reduced costs but also unlocked true elasticity.
Now, we’re seeing a similar trend in the streaming world. If I recall correctly, Ursa was the first to GA an object-storage-based streaming service, with WarpStream (Kafka-compatible) and AutoMQ following afterward.
I also believe the next generation of OLTP databases will use object storage as their main storage layer. This blog post shares some insights into this trend and the unique challenges of implementing object storage correctly for OLTP workloads, which are much more latency-sensitive.
https://www.eloqdata.com/blog/2025/07/16/data-substrate-bene...
It superficially sounds like a series of server processes fronting actual database servers, which suggests another layer of partition vulnerability and more points of failure. But I also had similar high-level concerns about the complexity of FoundationDB, and people seem satisfied as to the validity of that architecture.
I fail to see how one would scale underlying resources if the persistence is done by various storage systems, and you'd be subject to limitations of those persistence engines. That sounds like a suspect claim, like the "Data Substrate" is a big distributed cache that scales in front of delayed persistence to an actual database. Again, sounds like oodles of opportunities for failure.
"Data substrate draws inspirations from the canonical design of single-node relational database management systems (RDBMS)." Look, I don't think a good distributed database starts from a CA system and bolts on partition tolerance. I get you can get far with big nodes, fast networks, and sharding but ... I mean, they talk about B+ trees but a foundational aspect of Cassandra and other distributed databases is that B+ trees don't scale past a certain point, because there exist trees so deep/tall that even modern hardware chokes on it, and updating the B+ tree with new data gets harder as it gets bigger.
As others have said, I'll leave it to Aphyr to possibly explain.
I agree that introducing a middleware layer on top of databases adds more points of failure. EloqKV avoids this by integrating Compute, Logging, and Storage into a single system. In this setup, storage can be a database like Cassandra, but users never access it directly; all requests go through EloqKV, which manages ACID transactions and is responsible for handling crashes of the TxServer, LogServer, and Storage. You can think of the distributed storage engine as the disk in a traditional DBMS: its failure will obviously affect the system, but no more than a hard-disk failure would. In fact, in the cloud, all disks (e.g., EBS) are themselves distributed storage systems.
In a sense, this is a rethinking of the familiar Redis-plus-MySQL combination, which suffers from a similar issue: the two systems fail independently, so at best you get eventual consistency between the cache and the database. EloqKV aims to resolve exactly this problem; the sketch below shows the classic failure mode.
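```python
# The classic cache-aside pattern referred to above. `db` is a DB-API
# connection and `cache` a Redis client; both handles are hypothetical.
def update_user_email(db, cache, user_id, email):
    # 1. Write the source of truth.
    cur = db.cursor()
    cur.execute("UPDATE users SET email = %s WHERE id = %s", (email, user_id))
    db.commit()
    # 2. Invalidate the cache. If the process crashes between steps 1
    #    and 2, the cache keeps serving the old email until its TTL
    #    expires: the eventual-consistency gap described above.
    cache.delete(f"user:{user_id}")
```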
For more details, please check our scaling disks benchmark report.
https://www.eloqdata.com/blog/2024/08/25/benchmark-txlog#exp...
> For a distributed transaction with distributed storage and multiple nodes how does a transaction get coordinated?
EloqKV is a multi-writer system, similar to many distributed databases (FoundationDB, TiDB, CockroachDB), but we introduce several innovations in the transaction protocol. For example, the entire system operates without a single synchronization point, not even a global sequencer. We drew much inspiration from the Hekaton and TicToc papers.
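For readers unfamiliar with TicToc, the core idea is that a transaction computes its commit timestamp locally from metadata on the records it touched, rather than receiving one from a central counter. A minimal sketch of that rule (this is the published TicToc algorithm, not EloqKV's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Record:
    wts: int  # write timestamp: version of the last committed write
    rts: int  # read timestamp: latest logical time this version was read

def commit_timestamp(read_set: list[Record], write_set: list[Record]) -> int:
    """Choose a commit timestamp locally: at least the wts of every
    record read, and strictly greater than the rts of every record
    written. No global counter or sequencer is consulted."""
    ts = 0
    for r in read_set:
        ts = max(ts, r.wts)
    for w in write_set:
        ts = max(ts, w.rts + 1)
    return ts
```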
The use case is straightforward: each tenant has cached objects like: `cache:{tenant_id}:{object_id} → cached JSON/doc`
I also maintain a tag index to find all object IDs with a given tag: `tag:{tenant_id}:{tag} → set of object_ids (tag example: “pricing”, “profile”)`
When a tag changes (say “pricing”), I use a single Lua script to look up all object IDs in the tag set and then delete their cache entries in one atomic operation.
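For reference, a minimal sketch of that invalidation script using redis-py; the tenant, tag, and key-prefix values are hypothetical:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Atomically: read the tag set, delete each cached object it points
# to, then clear the set itself. Returns how many objects were dropped.
INVALIDATE_TAG = """
local ids = redis.call('SMEMBERS', KEYS[1])
for _, id in ipairs(ids) do
    redis.call('DEL', ARGV[1] .. id)
end
redis.call('DEL', KEYS[1])
return #ids
"""

invalidate = r.register_script(INVALIDATE_TAG)
deleted = invalidate(keys=["tag:t1:pricing"], args=["cache:t1:"])
print(f"invalidated {deleted} cached objects")
```

Note that on Redis Cluster a script like this is problematic, since the deleted keys are constructed inside Lua and must all land in one hash slot; per the claims earlier in this thread, EloqKV's distributed Lua support would let it span shards.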