These articles answer a lot of the questions you’re raising. It’s in OpenAI’s interest to keep claiming AGI hasn’t been achieved for as long as the current arrangement benefits them. They may be getting sweetheart deals on compute from Microsoft under it. The arrangement is possibly also beneficial for Microsoft, though Microsoft is in a somewhat different market, as a compute provider rather than a consumer.
OpenAI has been adding other compute providers, so this hurts Microsoft too: OpenAI can use its already low Microsoft pricing as leverage against other compute providers, who want the volume and prestige that serving a customer at OpenAI’s scale would bring and will bid lower to get it.
>The motte-and-bailey fallacy (named after the motte-and-bailey castle) is a form of argument and an informal fallacy where an arguer conflates two positions that share similarities: one modest and easy to defend (the "motte") and one much more controversial and harder to defend (the "bailey")
The bailey: "we can build AGI, give us millions of dollars."
The motte: "I think the point of all of this is it doesn’t really matter and it’s just this continuing exponential of model capability that we’ll rely on for more and more things."
It is remarkable how often this happens. We have a collection of separate but related technologies leading to the conception of a more general technology that does it all. We then proceed to build a towering inferno of complexity that is no doubt more general but less useful in specific instances. At this point, we conclude that what is needed are specialized tools for the separate use cases, so we promptly break the general technology up into many parts. Lather, rinse, repeat.
As always, people like him only say the things that help them reach their current goal. It doesn’t matter whether there is any truth to what they say. Moving goalposts, hyperbolic rhetoric, and manipulative marketing that reaches a large audience on an emotional level are the name of the game.
Elon Musk made this way of doing business popular, and now every hotshot tech CEO does it. But I guess it works, so people will keep doing it, since there are no repercussions.
I wish they would use a fraction of that money to offer a rigorous definition of intelligence, or to fund research in neurology, cognitive science, or psychology that could yield the insights needed to define it.
I wonder how they test their product, but I bet they don't use scientists from other fields, like psychology or neuroscience.
It's not a super profitable term, either. I'm already running Qwen3 Coder locally on my laptop and don't need any AI service. Just like that, the financial ambitions of AI have been snuffed out.
Some of us have seen these kinds of fads many, many times.
XML, CORBA, Java, startups, etc., etc.
Pump and dump.
Smart people collect the money from idealists.
AGI was always just a vehicle to collect more money. AI people will have to find a new way now.
So it would be in OpenAI's best interest to at least try to work toward it and to claim progress.
https://www.wired.com/story/microsoft-and-openais-agi-fight-... | https://archive.is/yvpfl
https://www.cnbc.com/2025/07/16/openai-googles-cloud-chatgpt... | https://archive.is/HGgWf
Microsoft is already planning for the eventuality where OpenAI exercises the option and effectively declares AGI.
https://www.bloomberg.com/news/articles/2025-07-29/microsoft... | https://archive.is/mLEmC
It's like living in an Escher painting.