1. How much data do you have, and how many entries? If you have lots of data spread across very small records, you might need an off-heap cache to keep GC pressure manageable. The only ready-made implementation I know of is Olric [1]. (A toy sketch of the off-heap idea follows the links below.)
2. If an on-heap cache will do, you might want to look at groupcache [2]. It's not "blazingly fast", but it's battle-tested. Potential drawbacks include LRU-only eviction and the lack of generics (meaning extra GC pressure from using `interface{}` for keys/values). It's also barely maintained, though you can find active forks on GitHub. (A minimal usage sketch also follows the links below.)
3. You could implement your own solution, though I doubt you'd want to go that route. Architecturally, segcache [3] looks interesting.
[1]: https://github.com/olric-data/olric
[2]: https://github.com/golang/groupcache
[3]: https://www.usenix.org/conference/nsdi21/presentation/yang-j...
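
To make the GC-pressure point in item 1 concrete, here's a toy sketch (all names hypothetical, not Olric's API) of the slab-and-offset trick that off-heap and GC-friendly caches build on; freecache and bigcache use variants of it to keep millions of entries out of the garbage collector's view:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// slabCache stores every value in one large byte slab addressed by
// integer offsets, so the GC sees a single pointer-free allocation
// instead of millions of tiny objects. Toy code: no eviction, no
// deletes, no overwrites, no locking.
type slabCache struct {
	slab    []byte
	offsets map[string]int // key -> offset of a length-prefixed value
}

func newSlabCache() *slabCache {
	return &slabCache{offsets: make(map[string]int)}
}

func (c *slabCache) Set(key string, value []byte) {
	c.offsets[key] = len(c.slab)
	var lenBuf [4]byte
	binary.LittleEndian.PutUint32(lenBuf[:], uint32(len(value)))
	c.slab = append(c.slab, lenBuf[:]...)
	c.slab = append(c.slab, value...)
}

func (c *slabCache) Get(key string) ([]byte, bool) {
	off, ok := c.offsets[key]
	if !ok {
		return nil, false
	}
	n := int(binary.LittleEndian.Uint32(c.slab[off : off+4]))
	return c.slab[off+4 : off+4+n], true
}

func main() {
	c := newSlabCache()
	c.Set("user:42", []byte("some payload"))
	if v, ok := c.Get("user:42"); ok {
		fmt.Printf("%s\n", v)
	}
}
```

Note the index map still holds string keys the GC must scan; real implementations go further and hash keys into pointer-free structures.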
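And a minimal sketch of embedding groupcache per item 2, assuming the getter signature on current golang/groupcache master (older releases used a custom `groupcache.Context` type instead of `context.Context`, so check whichever fork you pick); `loadUserFromDB` is a hypothetical stand-in for your real backing store:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/golang/groupcache"
)

func main() {
	// A group is a namespaced cache with a fixed byte budget and LRU
	// eviction. The getter runs on a cache miss to load the value.
	users := groupcache.NewGroup("users", 64<<20, groupcache.GetterFunc(
		func(ctx context.Context, key string, dest groupcache.Sink) error {
			data, err := loadUserFromDB(key) // hypothetical loader
			if err != nil {
				return err
			}
			return dest.SetBytes(data)
		}))

	var value []byte
	err := users.Get(context.Background(), "user:42",
		groupcache.AllocatingByteSliceSink(&value))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", value)
}

func loadUserFromDB(key string) ([]byte, error) {
	return []byte("payload for " + key), nil
}
```

For a multi-node deployment you'd also register peers with `groupcache.NewHTTPPool`; a single process works without it.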
I've always wondered about this. How reliable has that been in your experience? Thanks in advance.
That being said, query planning is generally where Oracle/MSSQL outshine MySQL/Postgres, especially at pruning unnecessary joins (e.g., eliminating a LEFT JOIN whose columns are never referenced when the join can't change the row count). BigQuery is great at it IME.