Most of the time, the output isn’t perfect, but it’s good enough to keep moving forward. And since I’ve already written most of the code, Jules tends to follow my style. The final result isn’t just 100%; it’s more like 120%, because of those little refactors and improvements I’d probably be too lazy to do if I were writing everything myself.
Recently, I refactored both the Go and Python versions to adopt Caffeine’s adaptive algorithm for improved hit ratio performance. But now that Otter v2 has switched to an adaptive W-TinyLFU approach and aligns more closely with Caffeine’s implementation, I’m considering focusing more on the Python version.
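To give a feel for the core idea, here is a minimal sketch of TinyLFU-style frequency-based admission, the mechanism W-TinyLFU builds on. This is my own simplified illustration, not the actual Caffeine or Otter implementation: a plain `Counter` stands in for the count-min sketch, and the `TinyLFUAdmission` class name and its parameters are hypothetical.

```python
from collections import Counter

class TinyLFUAdmission:
    """Simplified TinyLFU-style admission filter (illustrative only).

    A candidate evicted from the small admission window enters the main
    cache only if it has historically been accessed more often than the
    entry it would replace.
    """

    def __init__(self, sample_size=1000):
        self.freq = Counter()        # stand-in for a count-min sketch
        self.sample_size = sample_size
        self.events = 0

    def record_access(self, key):
        self.freq[key] += 1
        self.events += 1
        if self.events >= self.sample_size:
            self._age()              # periodic halving lets the filter
            self.events = 0          # forget stale popularity

    def _age(self):
        for key in list(self.freq):
            self.freq[key] //= 2
            if not self.freq[key]:
                del self.freq[key]

    def admit(self, candidate, victim):
        # Admit only if the candidate is hotter than the would-be victim.
        return self.freq[candidate] > self.freq[victim]

adm = TinyLFUAdmission()
for _ in range(5):
    adm.record_access("hot")
adm.record_access("cold")
print(adm.admit("hot", "cold"))   # the frequent key wins admission
```

The periodic halving in `_age` is what keeps the filter responsive to shifting access patterns; the *adaptive* part of W-TinyLFU goes further by also tuning the window size based on observed hit ratio, which this sketch omits.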
This feels like a good time to do so: the Python community is actively working toward free-threading, and once the GIL is no longer a bottleneck, running multi-threaded workloads on larger machines will become far more viable. At that point, a high-performance, free-threading-compatible caching library for Python will be important.