--- snip ---
There’s a lot of weird stuff in the C++ version that only really makes sense when you remember that this was made in Flash first and directly ported, warts and all. For example, maybe my worst programming habit is declaring temporary variables like i, j and k as members of each class, so that I didn’t have to declare them inside functions (which is annoying to do in Flash for boring reasons). This led to some nasty, difficult-to-track-down bugs, to say the least. In entity collision in particular, several functions share the same i variable. Infinite loops are possible.
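A minimal C++ sketch (names hypothetical, not the actual game code) of how a shared member `i` can produce exactly this kind of infinite loop; a guard caps the iterations so the bug is observable:

```cpp
struct Entity {
    int i = 0; // shared "temporary" used by every method, as described above

    void checkInner() {
        // Reuses the same member i, clobbering the caller's loop counter.
        for (i = 0; i < 2; ++i) {
            // pretend to test one overlap pair here
        }
    }

    int collide() {
        int outerIterations = 0;
        for (i = 0; i < 5; ++i) {
            checkInner(); // leaves i == 2, below the outer bound of 5
            if (++outerIterations > 100) return outerIterations; // infinite-loop guard
        }
        return outerIterations; // a correct version would return 5
    }
};
```

Without the guard, `collide()` never terminates: the inner loop keeps resetting `i` below the outer bound, so the outer loop restarts forever.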
--- snip ---
This sounds so bad, and confirms my prejudice that gaming code is terrible.
The EU has settled for using US tech and just taxing the success with fines.
If not, I'm not sure this brings more to the table than simple configuration changes that are rolled out through your next deployment, which should be frequent anyway, assuming you have continuous delivery.
When the first chess engines came out they only employed one of these: calculation. It wasn't until relatively recently that we had computer programs that could perform all of them. But it turns out that if you scale that up with enough compute you can achieve superhuman results with calculation alone.
It's not clear to me that LLMs sufficiently scaled won't achieve superhuman performance on general cognitive tasks even if there are things humans do which they can't.
The other thing I'd point out is that all language is essentially synthetic training data. Humans invented language as a way to transfer their internal thought processes to other humans. It makes sense that the process of thinking and the process of translating those thoughts into and out of language would be distinct.
After all, that's what Artificial General Intelligence would at least in part be about: finding and proving new math theorems, creating new poetry, making new scientific discoveries, etc.
There is even a new challenge that's been proposed: https://arcprize.org/blog/launch
> It makes sense that the process of thinking and the process of translating those thoughts into and out of language would be distinct
Yes, indeed. And LLMs seem to be very good at _simulating_ the translation of thought into language. They don't actually do it, at least not like humans do.
For example, you will find functions where the runtime value of a parameter changes the return type (e.g. you get a list of things instead of one thing). So unless we want to throw out huge amounts of Python libraries (and the libraries are absolutely the best thing Python has going for it), we have to accept that it’s not a very good statically typed language experience.
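A minimal sketch of that pattern (the function and names are hypothetical, not from any specific library): a type checker has to give this a union return type like `int | list[int]`, so every caller must narrow it before use.

```python
# Hypothetical example of the pattern described above: the runtime
# value of `many` decides whether you get one thing or a list.
def get_items(name, many=False):
    items = {"a": 1, "b": 2}
    if many:
        return list(items.values())  # sometimes a list...
    return items[name]               # ...sometimes a single value
```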
The JS community, on the other hand, has adopted TypeScript very widely. JS libraries are often designed with typing in mind, so despite being weakly typed, the static type experience is actually very good.
Moreover, for many libraries there are separate type-definition packages that add types to their interface, and more and more libraries ship types to begin with.
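For contrast, a sketch (hypothetical function, not from a real library) of how TypeScript's overload signatures can describe the same value-dependent return type that is awkward to express in Python:

```typescript
// Overload signatures: the literal type of `many` picks the return type.
function getItems(name: string, many: true): number[];
function getItems(name: string, many?: false): number;
function getItems(name: string, many: boolean = false): number | number[] {
  const items: Record<string, number> = { a: 1, b: 2 };
  return many ? Object.values(items) : items[name];
}
```

Callers then see a precise type at each call site (`number` or `number[]`) instead of a union they have to narrow by hand.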
Anyway, just wanted to share that, for me at least, it's not as bad in practice as you make it sound if you follow some good processes.
Also, given that these videos are going to be re-encoded, which is tremendously expensive, I feel that any optimization in this step is basically premature. Naively launching ffprobe 10,000 times is probably still cheaper than a single re-encode.