Asking someone to be a programmer in a loud, chaotic open office environment is not dissimilar to asking them to program while juggling two balls and sitting on a unicycle. It's just excess difficulty that doesn't need to be added on top of the job.
Then after the fact we made up a bunch of bullshit as to why this is some brilliant idea. Then this idea spread as if it was some kind of technological advancement because it worked for small tech companies trying to not spend money on furniture and walls.
We just aren't very good at any of this at scale. The open "office" and battle against remote work are different flavors of the same type of stupidity.
A perfect machine designed to only string sentences together as perfect responses with no reasoning built in IS indistinguishable from a machine that only builds sentences from pure reasoning.
Either way nobody understands what's going on in the human brain and nobody understands why LLMs work. You don't know. You're just stating a belief.
In a certain context that judges only the output, the model has achieved what is meant by "play the saxophone".
In another context, what is normally meant, the idea that the model has learned to play the saxophone is completely ridiculous and not something anyone would even try to defend.
In the context of LLMs and intelligence/reasoning, I think we are mostly talking about the latter and not the former.
"Maybe you don't have to blow through a physical tube to make saxophone sounds, you can just train on tons of output of saxophone sounds and then it is basically the same thing"
The entire discussion is ridiculous.