and yet...
Lots of questions about whether this makes sense, and it's highly likely Amazon never gets $38B in cash from OpenAI out of this.
GitHub Copilot (especially in VS Code) is the only application of LLMs from Microsoft that I've found useful. I would have loved great Copilot support in Word for working on a large, complex document, but I haven't found it to work well.
Is there a level of consciousness prior to language that willfully assembles the next word out of more subtle mind stuff? That would lead to an infinite regress.
I think they are trying to learn more about it to see if something can be done in cases with negative outcomes, not in cases where someone is alone and happy.