Most of these efforts stay relatively niche compared to Lucene because they never quite catch up in terms of features, scale, etc., and because the programmers involved are tinkering at the fringes of the problem space instead of coming up with algorithmic breakthroughs, which at this point is the quickest way to make things faster. Well, that, and cutting some major corners and pretending it's all the same by running some silly benchmark.
The fallacy here is conflating language, idioms, memory models, frameworks, etc., and assuming it's all set in stone. It isn't. Just because you are using Java does not mean everything has to be garbage collected, for example. Lucene actually uses memory-mapped files, byte buffers, etc. for a lot of things, so it does not actually need to do a lot of garbage collecting. It uses the same kinds of solutions you'd pick when writing C, and they perform ballpark as you'd expect them to, meaning that unless you improve the algorithms, you are not going to be magically a lot faster. The same is true of HotSpot, the JVM's runtime compiler, which is written in C++ and, surprise, uses a lot of the same trickery used by the fine people working on e.g. LLVM. So, a lot of things people assume must surely be slower just aren't, and a lot of the insurmountable bottlenecks people assume must surely be there actually have well-known workarounds.
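To make that concrete, here is a minimal sketch of the off-heap pattern I mean (the file name and the checksum loop are made up for illustration; Lucene's real machinery lives in classes like MMapDirectory): the file is mapped straight into the process address space, so reads go through the OS page cache and never allocate per-read objects on the Java heap.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
    public static void main(String[] args) throws IOException {
        // "postings.bin" is a hypothetical data file for this example.
        try (FileChannel channel = FileChannel.open(
                Path.of("postings.bin"), StandardOpenOption.READ)) {
            // Map the whole file read-only. The buffer's contents live
            // outside the Java heap, so the GC never scans this data.
            MappedByteBuffer buf = channel.map(
                    FileChannel.MapMode.READ_ONLY, 0, channel.size());
            long checksum = 0;
            // Sequential reads against the mapped region; each getLong()
            // is a bounds-checked load, not an object allocation.
            while (buf.remaining() >= Long.BYTES) {
                checksum += buf.getLong();
            }
            System.out.println("checksum: " + checksum);
        }
    }
}
```

This is the same thing you'd reach for with mmap(2) in C: let the kernel's page cache do the caching and pay no per-record allocation cost, which is exactly why "Java, therefore GC-bound" doesn't hold.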
Of course there is always room for more optimization. Lucene is well over two decades old now and they still regularly come up with major performance improvements. There's nothing magical about how they do that; it's a lot of hard work, and it would be hard to improve on just by switching languages and compilers.
I would expect to see Java, C#, C, C++, and Rust mentioned quite a bit in the threads here. It's all relevant.