I've been documenting this theme in a Twitter thread here: https://twitter.com/dmvaldman/status/1358916558857269250
If you really probe GPT, you'll see that anything beyond an initial sentence or two starts to show how superficial its understanding & intelligence really are; it's basically a really impressive instance of Searle's Chinese room.
I would also contend that there is reasoning happening, and that zero-shot performance demonstrates this. Specifically, reasoning about the intent of the prompt: given something like "Translate English to French: cheese =>" with no fine-tuning or examples, the model infers that it should translate. The fact that you get this simply by building a general-purpose text model is a surprise to me.
Something I haven't seen yet is a model simulating the mind of the questioner the way humans do, over time (minutes, days, years).
In 3 years, I'll ping you :) Already made a calendar reminder
Many people have different definitions of AGI, though. For me it clicked when I realized that text has this universality property of capturing any intent.
What I think is the issue is that we have a broadcasting machine (social media, news, etc.) that runs on sensationalism, so you are always hearing about fringe ideas with no signal about the size of the population that actually supports them.