How that works in practice, I'm not sure; with some sleuthing it would probably be possible to find out at least some of it. But beyond that, I honestly don't know.
Edit: notice that I said "100%, or nearly so". I realize that 100% is an unrealistic metric for an LLM, but come on, the robots should be at least as competent as the humans they replace, and ideally much more so.
Source: One of the most classic internet websites, zombo.com (sound on)
Iteration speed trumps all in research. Most of what Python does is launch GPU operations; if you're seeing slowdowns from Python-land, you're doing something terribly wrong.
Python is an excellent (and yes, fast!) language for orchestrating and calling ML stuff. If C++ code is needed, call it as a module.
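To illustrate the "call compiled code as a module" point, here's a minimal sketch using `ctypes` from the standard library to call into libc's math library; the names and setup here are just one illustrative route (real ML code would use an extension built with pybind11 or similar), but the pattern is the same: Python orchestrates, compiled code does the work.

```python
import ctypes
import ctypes.util

# Locate and load the C math library; the heavy lifting happens in
# compiled code, Python just dispatches the call.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes marshals arguments correctly.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

result = libm.sqrt(2.0)
print(result)
```

The per-call dispatch overhead here is microseconds; as long as each call launches a meaningful chunk of compiled work (as a GPU kernel launch does), that overhead disappears into the noise.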
My experience has been that getting over the daunting factor of a big, wide world full of noise and marketing, and simply committing to a problem, learning it, and slowly bootstrapping over time, tends to yield phenomenal results in the long run for most applications. And if not, there's often an adjacent or side field you can pivot to and still make immense progress.
The big players may have the advantage of scale, but there is so, so much that can be done still if you look around and keep a good feel for it. <3 :)
Both share equal credit, I feel (as do the paper's co-authors!); both put in a lot of hard work, though I tend to bring up Bernstein since he's pretty quiet about it himself.
(Source: am experienced speedrunner who's been in these circles for a decent amount of time)