As an online IDE they would have had a chance. But when they pivoted into AI, they chose to enter a highly crowded space with very strong players.
Recently OpenAI, and now Anthropic, have been building mobile clients as well:
https://www.testingcatalog.com/anthropic-prepares-claude-cod...
Now, there are lots of variables that can be tweaked here, so it's possible to get it to work. But there's a lot less room for error.
Go on something like OpenRouter with GPT 5.1 and use the chat, then check the billing, and you'll see an average joe query costs something like $0.00102.
You're quoting figures from articles about the initial ChatGPT release in 2022.
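For anyone who wants to sanity-check the per-query number themselves, here's a minimal sketch of the arithmetic the billing page is doing. The per-million-token prices below are placeholders, not actual GPT 5.1 or OpenRouter rates; plug in the real numbers from the model's pricing page.

```python
# Rough per-query cost check, mirroring what a provider's billing page shows.
# NOTE: these prices are illustrative placeholders, NOT real GPT 5.1 rates.
INPUT_PRICE_PER_M = 1.25    # assumed $/1M input (prompt) tokens
OUTPUT_PRICE_PER_M = 10.00  # assumed $/1M output (completion) tokens

def query_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the dollar cost of a single chat request."""
    return (prompt_tokens * INPUT_PRICE_PER_M
            + completion_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A typical short chat turn: small prompt, short answer.
print(f"${query_cost(prompt_tokens=300, completion_tokens=150):.5f}")
# -> $0.00188, i.e. a fraction of a cent per query at these assumed rates.
```

The point stands regardless of the exact rates: at current per-token pricing, an everyday chat query lands around a tenth of a cent, far below the cost figures circulating from the 2022 launch coverage.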