Just use memcache for query caching if you have to. And only if you have to, because invalidation is hard. It's cheap, reliable, mature, fast, and scalable; it requires little understanding, has decent-quality clients in most languages, holds no durable state, is available off the shelf from most cloud providers, and works in-cluster in Kubernetes if you want to do it that way.
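To make the pattern concrete, here's a minimal sketch of cache-aside query caching, which is the usage the comment describes. `FakeMemcache` is an in-memory stand-in for a real memcached client (e.g. pymemcache); the key scheme, TTL, and query callables are hypothetical, not anyone's actual code.

```python
import time

class FakeMemcache:
    """In-memory stand-in mimicking a memcached client's get/set/delete with TTL."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        value, expires = self._store.get(key, (None, 0))
        return value if time.monotonic() < expires else None
    def set(self, key, value, expire=60):
        self._store[key] = (value, time.monotonic() + expire)
    def delete(self, key):
        self._store.pop(key, None)

cache = FakeMemcache()

def get_user(user_id, run_query):
    key = f"user:{user_id}"
    row = cache.get(key)
    if row is None:                      # miss: fall through to the database...
        row = run_query(user_id)
        cache.set(key, row, expire=300)  # ...and populate the cache
    return row

def update_user(user_id, run_update):
    run_update(user_id)
    cache.delete(f"user:{user_id}")      # invalidate on write: the hard part
```

The delete-on-write step is where "invalidation is hard" bites: every code path that mutates the row has to remember to do it.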
I can't find a use case for Redis where postgres (or postgres+memcache) isn't a simpler and/or superior solution.
Just to give you an idea of how good memcache is: I think we served 9 billion requests across half a dozen nodes over a few years without a single process restart.
memcached clients also frequently use ketama consistent hashing, so load distribution and clustering are much easier and much simpler than redis clustering (sentinel, etc.).
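For anyone who hasn't seen it, this is roughly what ketama-style consistent hashing does client-side. A self-contained sketch; the class name, vnode count, and node strings are illustrative, not any particular client's API:

```python
import hashlib
from bisect import bisect

class KetamaRing:
    """Toy ketama-style hash ring, as used by many memcached clients."""
    def __init__(self, nodes, vnodes=160):
        # Each server gets many points on the ring so keys spread evenly
        # and removing a server only remaps the keys it owned.
        self.ring = {}
        for node in nodes:
            for i in range(vnodes):
                self.ring[self._hash(f"{node}-{i}")] = node
        self.points = sorted(self.ring)

    @staticmethod
    def _hash(key):
        # First 4 bytes of md5, like classic ketama implementations.
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

    def node_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        idx = bisect(self.points, self._hash(key)) % len(self.points)
        return self.ring[self.points[idx]]
```

The payoff: when a node drops out, only the keys that hashed to that node move; everything else keeps hitting the same server, so your hit rate doesn't crater.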
Mcrouter[1] is also great for scaling memcached.
dragonfly, garnet, and pogocache are other alternatives too.
[1]: https://github.com/facebook/mcrouter
redis i/o is multithreaded, it's just the command loop that's single-threaded. If all you're doing is SET and GET of individual key-value pairs, every time I've seen a redis instance run hot under that sort of load, the bottleneck was the network card, never the CPU.
I ... actually think scaling redis for simple k-v storage is already pretty easy so I dunno that that's much of a concern?
mcrouter ... damn I haven't thought about mcrouter in at least 10 years.