> socio-emotional capabilities of autonomous agents
The paper fails to note that these 'capabilities' are illusory. They are a product of how the behaviors of LLMs "hack" our brains, exploiting the social-species equipment that hundreds of thousands of years of evolution gave us. https://jenson.org/timmy/
That is the dehumanization process they are describing.
As opposed to what grandmasters actually did, which was less look-ahead and more pattern matching to strengthen the position.
LLMs now successfully leverage pattern matching, but interestingly it is still a kind of brute-force pattern matching, requiring the statistical absorption of essentially all available text, far more than a human absorbs in a lifetime.
This enables an LLM to interpolate an answer from the structure of the absorbed texts with reasonable statistical relevance. It is still not quite “what humans do,” since it depends on that brute-force statistical analysis of a vast corpus. For example, training on all available Python source on GitHub and elsewhere (curated to avoid bad examples) yields pretty good results: not how a human would arrive at them, but statistically likely to be pertinent and correct.
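To make the "interpolate from absorbed structure" point concrete, here is a minimal sketch of the idea at its crudest: a bigram model that does nothing but count which token follows which in a corpus and then samples continuations from those counts. The tiny corpus is a stand-in, and this is obviously not how a transformer works internally, but the "absorb statistics of text, then sample" loop is the same in spirit.

```python
import random
from collections import defaultdict, Counter

# Toy bigram "language model": pure statistical pattern matching.
# It absorbs a corpus, counts which token follows which, and then
# generates text by sampling from those counts. No rules, no
# understanding -- just frequencies over observed text. A real LLM
# does something far richer over trillions of tokens, but the
# training signal is still "structure of the absorbed text."
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)
tokens = corpus.split()

# counts[w] maps each token to a Counter of its observed successors.
counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 10, seed: int = 0) -> str:
    """Sample a continuation by repeatedly drawing the next token
    in proportion to how often it followed the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # token never seen mid-corpus; nothing to predict
        words = list(followers)
        weights = list(followers.values())
        out.append(rng.choices(words, weights=weights, k=1)[0])
    return " ".join(out)

# Prints a plausible-looking sentence stitched together purely
# from bigram statistics of the absorbed corpus.
print(generate("the"))
```

The output is "pertinent" in exactly the statistical sense described above: every transition it emits is one it has seen, so the result looks like the corpus without being copied from it or understood.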