"Whether Sora lasts or not, however, is somewhat beside the point. What catches my attention most is that OpenAI released this app in the first place.
It wasn’t that long ago that Sam Altman was still comparing the release of GPT-5 to the testing of the first atomic bomb, and many commentators took Dario Amodei at his word when he proclaimed 50% of white collar jobs might soon be automated by LLM-based tools."
That's the thing: this has all been predicated on the notion that AGI is next. That's what the money is chasing, and why it has sucked in astronomical investments. The tech is cool, but that's not why Nvidia is a multi-trillion-dollar company. It has that value because its chips were promised to be the brainpower behind AGI.
What signals have you seen that point to investment being predicated on AGI? Rising Nvidia stock prices could also be explained by an expectation of increased inference usage by office workers, which increases demand for GPUs and justifies datacenter buildouts. That's a much more "sober" outlook than AGI.
In fact, a fun thing to think about is what signals we could observe in markets that specifically point to AGI as the expectation, as opposed to a simple bullish outlook on inference usage.
"Boosting Nvidia stock prices could also be explained by an expectation of increased inference usage by office workers which increases demands for GPUs and justifies datacenter buildouts"
AI is already integrated into every single Google search, as well as Slack, Notion, Teams, Microsoft Office, Google Docs, Zoom, Google Meet, Figma, Hubspot, Zendesk, Freshdesk, Intercom, Basecamp, Evernote, Dropbox, Salesforce, Canva, Photoshop, Airtable, Gmail, LinkedIn, Shopify, Asana, Trello, Monday.com, ClickUp, Miro, Confluence, Jira, GitHub, Linear, Docusign, Workday
.....so where is this 100X increase in inference demand going to come from?
I think the motivation for someone like Altman is not AGI, it's power and influence.
And when he wields billions he has power; it doesn't really matter whether AGI is coming.
Yep, he just wants to become too big to fail at this point.
I view OpenAI like a pyramid scheme: taking in increasing amounts of money to pursue ever-growing promises that can be dangled like a carrot in front of the next investor.
If you owe investors $100 million, that's your problem. If you owe investors $100 billion, that's their problem.
Porn has driven everyday tech before: online payment systems, broadband adoption.
Porn (visual and written erotic expression) has been a normal part of the human experience for thousands of years, across different religions, cultures, and technological capabilities. We're humans.
There will always be a market for it, wherever there is a mismatch between desire for and access to sexual activity.
Generating your own porn is definitely a huge market. Sharing it with others, and the follow-on concern of what's in that shared content, could lead to problems.
To be very fair here, long before GPT-5, porn was already being produced with Stable Diffusion (and other open models). Civitai in particular was an open playground for this, with everything from NSFW LoRAs and prompts to fine-tuned models.
I had to work with SDXL models from there for a bit, and the amount of porn on the site, before the recent cleanse, was astonishing.
I apologise for talking past the point you're making, but, Bob Ross was a human being, you know, with thoughts and stuff. How could any of these AI toys possibly compare?
I would love to have Bob Ross, wielding a crayon, add some happy little trees to the walls of a Target.
This take feels like classic Cal Newport pattern-matching: something looks vaguely "consumerish," so it must signal decline. It's a huge overreach.
Whether OpenAI becomes a truly massive, world-defining company is an open question, but it's not going to be decided by Sora. Treating a research-facing video generator as if it's OpenAI's attempt at the next TikTok is just missing the forest for the trees. Sora isn't a product bet, it's a technology demo or a testbed for video and image modeling. They threw a basic interface on top so people could actually use it. If they shut that interface down tomorrow, it wouldn't change a thing about the underlying progress in generative modeling.
You can argue that OpenAI lacks focus, or that they waste energy on these experiments. That's a reasonable discussion. But calling it "the beginning of the end" because of one side project is just unserious. Tech companies at the frontier run hundreds of little prototypes like this... most get abandoned, and that's fine.
The real question about OpenAI's future has nothing to do with Sora. It's whether large language and multimodal models eventually become a zero-margin commodity. If that happens, OpenAI's valuation problem isn't about branding or app strategy, it's about economics. Can they build a moat beyond "we have the biggest model"? Because that won't hold once open-source and fine-tuned domain models catch up.
So sure, Sora might be a distraction. But pretending that a minor interface launch is some great unraveling of OpenAI's trajectory is just lazy narrative-hunting.
Their first bet was that they were going to be the frontier model provider by a good margin, that others would not be able to compete on "intelligence", and that they could get distribution via big customers looking to buy model access.
The dominant-model-provider strategy has already failed: many actors have models that rival theirs, both established players (Google) and newcomers (Anthropic). Open models are not too shabby either, enough to undermine the narrative of "we are uniquely able to build powerful models". As you say, a commoditization process has started, and it might be a race to the bottom in terms of margins.
So OpenAI has moved to a new, adapted strategy in which they want to own the customers to a much larger degree and rely less on partners for distribution. This is likely because their prospective partners have a bunch of viable models to select between (many end products for power users let people select freely) and face high competitive pressure on costs, which defines the margin and competitiveness of the end products. Codex, Sora, their new web browser announcement, and adjustments to ChatGPT are all meant to secure a large base of direct end users: more brand recognition, more influence, more monetization possibilities.
So I think it is a considerable pivot from their initial plan and hopes. But it is not an unraveling - it is a rather smart response to the fierce competition in the market.
I agree. My bet is that OpenAI will not fulfill its mission of developing AGI by 2035, and I would be surprised if they ever did. As much as they might want to, there are only so many dreams you can whisper into rich people's ears before they tell you to go away. And without rich people's money, OpenAI will fall like a house of cards. The wealthy won't have infinite patience.
It seems they are going to try to maximize their installed base, build the infrastructure, and try to own everything in between, whether it’s LLM or some other architecture that arises. Owning data centers and an installed base sounds great in theory, but it assumes you can outbuild hyperscalers on infrastructure and that your users will stick around. Data centers are a low margin grind and the installed base in AI isn’t locked in like iPhones. Apple and Google still control the endpoints, and I think they’ll ultimately decide who wins by what they integrate at the OS level.
There are also interesting things one could do with models like Sora, depending on how it actually performs in practice: prompting to segment, for example. And if it's fast enough, the thing could very possibly become a foundation for robotics.
OpenAI is making a wild number of product plays at once, trying to leverage the value of the frontier model, brand value, and massive number of eyeballs they own. Sora is just one of many. Some will fail and maybe some will succeed.
It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them. That's what OpenAI is trying to do with Sora, and with Codex, Apps, "Agent" flows, etc. I don't think there's more to read into it than that.
On some level they know that LLMs alone won't lead to AGI, so they have to take a shotgun approach and diversify; integrating parts of all these paths is also more likely to lead to the outcome they want than going all in on one.
Also because they have the funding to do it.
Reminds me a bit of early Google, Microsoft, Xerox, etc.
This is just what the teenage stage of the top tech startup/company in an important new category looks like.
The massive cost of this product is unique though (not even counting the copyright lawsuits/settlements coming). I can't think of any side projects that require this level of investment.
> It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them. That's what OpenAI is trying to do with Sora, and with Codex, Apps, "Agent" flows, etc. I don't think there's more to read into it than that.
It makes them look desperate though. Nothing like starting tons of services at once to show you have a vision.
> It seems true that no company has used frontier models to create a product with business value commensurate with the cost it takes to train and run them.
Anthropic has said that every model they've trained has been profitable. Just not profitable enough to pay to train the next model.
I bet that's true for OpenAI's LLMs too, or would be if they reduced their free tier limits.
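The "profitable per model, but not enough to fund the next one" dynamic is easy to sketch with toy numbers. All dollar figures below are hypothetical; the thread's claim is qualitative, not quantitative:

```python
# Toy cash-flow sketch: each model generation earns back more than it cost,
# yet the profit falls short of the (much larger) next training run.

def generation_profit(train_cost, inference_revenue, inference_cost):
    """Lifetime profit of one model generation, in $M (illustrative)."""
    return inference_revenue - inference_cost - train_cost

# Hypothetical generations: training cost grows ~10x each time,
# while net inference revenue grows more slowly.
generations = [
    # (train_cost, inference_revenue, inference_cost), all in $M
    (100, 400, 150),
    (1_000, 2_500, 900),
    (10_000, 20_000, 7_000),
]

for i, (train, rev, infer) in enumerate(generations):
    profit = generation_profit(train, rev, infer)
    if i + 1 < len(generations):
        next_train = generations[i + 1][0]
        print(f"gen {i + 1}: profit ${profit}M, "
              f"next run costs ${next_train}M, "
              f"self-funding: {profit >= next_train}")
    else:
        print(f"gen {i + 1}: profit ${profit}M")
```

With these made-up numbers every generation is individually profitable, but no generation's profit covers the next training run, so the gap has to be filled by outside investment each cycle.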
> OpenAI is making a wild number of product plays at once
It's similar to the process of electrification. Every existing machine/process needed to be evaluated to see if electricity would improve it: dish washing, clothes drying, food mixing, etc.
OpenAI is not alone. Every one of their products has a (sometimes superior) equivalent from Google (e.g. Veo for Sora) or other competitors.
I could see Sora having a significant negative impact on short form video products like TikTok if they don’t quickly and accurately find a way to categorize its use. A steady stream of AI generated video content hurts the value prop of short form video in more than one way… It quickly desensitizes you and takes the surprise out that drives consumption of a lot of content. It also of course leaves you feeling like you can’t trust anything you see.
Do people on the dopamine drip really care how real their content is? Tons and tons of it is staged or modified anyways. I'm not sure there's anything Real™ on TikTok anyways.
I think a lot of them actually do. It's easy to see TikTok users as mindless consumers, but the more you consume the more you develop a taste for unique content. Over the past few years the content that seems to truly do well at a global scale very often has markers of authenticity. Once something becomes easy to produce it becomes commonplace and you become sick of it quickly.
Thought the same. The human-generated content is just as brainless as the AI-generated slop. People who watched the former will also watch the latter. This will not change a lot, I think.
I mean, this is basically already status quo for YouTube Shorts. Tons and tons of shorts are AI-voice over either AI video or stock video covering some pithy thing in no actual depth, just piggybacking off of trending topics. And TikTok has had the same sort of content for even longer.
The "value" of short video content is already somewhat of a poor value proposition for this and other reasons. It lets you just obliterate time which can be handy in certain situations, but it also ruins your attention span.
I got the feeling when this was released that it was just another metric to justify further investment. They were guaranteed to have a lot of users, so they can turn around and say "well, we have two huge applications and we're just getting started." We've seen that investors don't care too much about product quality, just large numbers.
> It’s unclear whether this app will last. One major issue is the back-end expense of producing these videos. For now, OpenAI requires a paid ChatGPT Plus account to generate your own content. At the $20 tier, you can pump out up to 50 low-resolution videos per month. For a whopping $200 a month, you can generate more videos at higher resolutions. None of this compares favorably to competitors like TikTok, which are exponentially cheaper to operate and can therefore not only remain truly free for all users, but actually pay their creators.
fwiw, there's no requirement to have a subscription to create content.
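For scale, the quoted Plus-tier numbers imply a hard ceiling on per-video revenue. OpenAI's actual per-video compute cost is not public, so this back-of-envelope only shows the break-even bound, not the real margin:

```python
# Revenue ceiling per video at the quoted Plus tier: $20/month for up to
# 50 low-resolution videos. For the tier to break even on a user who hits
# the cap, the cost of one generation must stay below this ceiling.
monthly_fee = 20.00   # USD (from the quoted article)
video_cap = 50        # low-res videos per month (from the quoted article)

revenue_ceiling = monthly_fee / video_cap
print(f"at most ${revenue_ceiling:.2f} of revenue per generated video")
```

If a single low-res generation costs more than roughly $0.40 in GPU time, every capped subscriber is served at a loss, before even counting free viewers of the feed.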
"It wasn't that long ago that Sam Altman was still comparing the release of GPT-5 to the testing of the first atomic bomb, and many commentators took Dario Amodei at his word when he proclaimed 50% of white collar jobs might soon be automated by LLM-based tools.
A company that still believes that its technology was imminently going to run large swathes of the economy, and would be so powerful as to reconfigure our experience of the world as we know it, wouldn't be seeking to make a quick buck selling ads against deep fake videos of historical figures wrestling. They also wouldn't be entertaining the idea, as Altman did last week, that they might soon start offering an age-gated version of ChatGPT so that adults could enjoy"
They might be forced to do so because the current inference costs are not really covered by the $20 monthly fee.
Who knows what they have promised to investors, and the real cash flow is hard to be certain about given the circular nature of cross-investing between the biggest players.
The fact that OpenAI is pushing Sora, and that Altman is now even hinting at introducing "erotic roleplay"[0], makes it obvious: OpenAI has stopped being a real AI research lab. Now they're just another desperate player in a no-moat market, scrambling to become the primary platform of this hype era and lock users into their platform, just like Microsoft and Facebook did before in the PC and social eras.
The number of animal abuse videos I've seen is a bit disturbing. It only demonstrates how careless they have been, possibly intentionally. I know people on HN have been describing the various reasons why OpenAI has not been a good player, but seeing it first-hand is visceral in a way that makes me concerned about them as a company.
But if you followed them, they have been focusing only on product for the last two years. The grand GPT-5, and the scaling laws from which all their LLM-AGI hopes originated, turned out to be a dud.
Oh and the ChatGPT consumer app is seeing slowing growth: https://techcrunch.com/2025/10/17/chatgpts-mobile-app-is-see...
What we got next: porn
The app is fun to use for about 10 minutes then that is it.
Same goes for Grok imagine. All people want to do is generate NSFW content.
What happened to improving the world?
ChatGPT clearly is "for consumers", whereas Sora is a kind of enshittification to monetize engagement. It's right to question the latter.
You always get the "who cares if it's fake" folks; even on Reddit, people will point out that something is AI and inevitably someone replies "who cares".
But I'm not sure how many people that is or what kind of content they care or don't care about.
[0] https://www.404media.co/openai-sam-altman-interview-chatgpt-...