(dang please don't ban me for a low-quality comment :) i couldn't resist but will not make it a habit!)
Sure, GPT-3 can respond with factoids, but it doesn't actually understand anything. If I chat with the model and ask it "what did we talk about thirty minutes ago?", it's completely clueless. A few weeks ago Computerphile put out a video of GPT-3 writing poetry that was allegedly identified as computer-generated only half the time, but if you actually read the poems they're just lyrical-sounding word salad; the model doesn't understand what it's talking about at all.
Honestly, the only thing I expect from this is a barrage of spam or fake news that uncritical readers can't distinguish from human output.
It is difficult to define AGI, and it is difficult to say what the remaining puzzle pieces are, so it's difficult to predict when it will happen. But I think the responsible thing is to treat near-term AGI as a real possibility and prepare for it (this is the OpenAI charter we wrote two years ago: https://openai.com/charter/).
What does seem clear is that in the coming years we are going to have very powerful tools that are not AGI but that still change a lot of things. And that's great--we've been waiting long enough for a new tech platform.
You can view the demo at https://twitter.com/i/broadcasts/1OyKAYWPRrWKb starting around 29:00.
It's Sam Altman demoing a massive OpenAI model that was trained on open-source GitHub repos using a Microsoft supercomputer. It's not IntelliCode, but the host says they're working on compressing the models to a size that would be feasible for IntelliCode. The model generates entire functions from English-language comments, or simply from function signatures. Pretty cool.
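To make the comment-to-function idea concrete: the prompt is just an English comment plus a bare signature, and the model completes the body. A minimal sketch of what such an input/output pair might look like (this function is hypothetical, not from the demo):

```python
# Prompt fed to the model: a comment plus an unfinished signature.
#
#   # Return the n most common words in a block of text,
#   # ignoring case, as (word, count) pairs.
#   def top_words(text, n):
#
# A plausible completion the model might generate:

from collections import Counter

def top_words(text, n):
    words = text.lower().split()
    return Counter(words).most_common(n)
```

The point of the demo is that the developer writes only the comment and signature; everything below them is generated.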
This is a massive, massive deal. For context, GPT-3 apps took off in the months before ChatGPT went viral because a) text-davinci-003 was released and was a significant performance increase, and b) the price was cut from $0.06/1k tokens to $0.02/1k tokens, which made consumer applications feasible without a large upfront cost.
A much better model at 1/10th the cost warps the economics completely, to the point that it may beat in-house finetuned LLMs.
I have no idea how OpenAI can make money on this. This has to be a loss-leader to lock out competitors before they even get off the ground.