karel-3d · 2 months ago
I don't like Ed Zitron's automatic dismissal of everything AI, the constant profanities in his writing are getting old, and his pieces usually aren't very well structured. But that said... I like the perspective he has on the money involved.

https://www.wheresyoured.at/the-case-against-generative-ai/

OpenAI needs 1 trillion dollars in the next four years just to keep existing. That's more than all currently available capital, together, from all private equity. Even if you pooled all private equity together, it still wouldn't be enough for OpenAI's next four years.

It's just a staggering amount of money that the world needs to bet on OpenAI.

OtherShrezzing · 2 months ago
I like reading Zitron’s output, but this remark stood out to me as him weighing in on a domain he’s basically clueless about.

Spread across 4 years, there’s way more than $1tn in available private capital. Berkshire Hathaway alone has 1/3 of that sitting in a cash pile right now. You don’t have to look at many balance sheets to see that the big tech companies are all also sitting on huge piles of cash.

I don’t personally believe OAI can raise that money. But the money exists.

ACCount37 · 2 months ago
Isn't that every domain?

The dude is a living embodiment of "overconfident and wrong". He picks a side, then cherry picks until he has it all lined up so that his side is Obviously Right and anything else is Obviously Wrong.

Then reality happens and he's proven wrong, but he never fucking learns.

johnnienaked · 2 months ago
Berkshire's capital is not available for investment in OpenAI
eboynyc32 · 2 months ago
I’ll take that bet.
gooodvibes · 2 months ago
> OpenAI needs 1 trillion dollars in the next four years just to keep existing.

More like $115 billion based on your own link. The $1T is a guess based on promises made, far from "needs to spend that much just to exist".

Spending $30 billion a year doesn't sound that apocalyptic.

KaiserPro · 2 months ago
> Spending $30 billion a year doesn't sound that apocalyptic.

They have $1,000bn in future liabilities. https://www.ft.com/content/5f6f78af-aed9-43a5-8e31-2df7851ce...

They have revenue of about $4bn per half-year, with a loss of $13bn.

Rebuff5007 · 2 months ago
OpenAI WANTS 1 trillion dollars.

They do not need it. Arguably no one needs it. I am at best lukewarm on LLMs, but how are any sane people rationalizing these requests? What is the opportunity cost of spending a billion or even a hundred billion dollars on compute instead of on climate tech or food security or healthcare or literally anything else?!

AtlasBarfed · 2 months ago
Well, it's the rich's money, dedicated to replacing as many people as possible in the Great war on labor.

I easily see the rich betting a trillion dollars especially if it's not their money and they start employing government funds in the name of a fictitious AI arms race.

They smell blood in the water. Reducing everyone to as close to minimum wage as possible.

Capital is already concentrated, aligned along monopolies and cartels, oligarchical control, and AI is the final key to total control to whatever degree they desire.

yibg · 2 months ago
I think it's pretty smart actually.

Altman has sort of linked the fate of OpenAI to that of Nvidia, AMD, Oracle, Microsoft, etc., with these huge deals/partnerships. We've seen the impact of these deals on stock prices before even a penny has changed hands.

Tracks with his reputation for power play and politics.

bigbuppo · 2 months ago
That certainly explains why Microsoft is so desperate to force everyone into using their AI whether they want to or not. I'm wondering if the deal will end Microsoft when OpenAI goes belly up.
rf15 · 2 months ago
Yes, it's wonderful. I'm young enough to watch them drag each other down in the future, including bailout attempts.
dragonwriter · 2 months ago
> OpenAI needs 1 trillion dollars in the next four years just to keep existing. That's more than all currently available capital, together, from all private equity.

Recent estimates I've seen put uninvested private equity capital at ~$1.5 trillion and total private equity at $5+ trillion, with several hundred billion in new private equity funds raised annually. So this simply seems incorrect, even if you count only current "dry powder" or only new funds raised over the next four years, much less both and/or the rest of private equity.

nl · 2 months ago
> OpenAI needs 1 trillion dollars in the next four years just to keep existing.

That's a mischaracterization: They need that order of investment to meet the demand they forecast.

It's unclear what other trajectories look like.

Additionally, I don't know who Ed Zitron is but he clearly doesn't follow how infrastructure projects are funded and how OpenAI is doing deals.

See for example the AMD deal last week where they seem to have at least partially used their ability to increase AMD's stock price to pay for future GPUs.

Mining companies do the kinds of "circular deals" that OpenAI is criticized for all the time - they will take equity in their supplier companies. It's easy to see similar arrangements for this $1T investment in the future.

card_zero · 2 months ago
I thought the kind of circular financing that OpenAI is criticized for is lending to or investing in its customers.
karel-3d · 2 months ago
OK, maybe I misjudged his writing on that level too, and it's not very well informed either.
lkrubner · 2 months ago
In late 2021, Ed Zitron wrote (on Twitter) that the future of all work was "work from home" and that no one would ever work in an office again. I responded:

"In the past, most companies have had processes geared towards office work. Covid-19 has forced these companies to re-gear their processes to handle external workers. Now that the companies have invested in these changed processes, they are finding it easier to outsource work to Brazil or India. Here in New York City, I am seeing an uptick in outsourcing. The work that remains in the USA will likely continue to be office-based because the work that can be done 100% remotely will likely go over seas."

He responded:

"Pee pee poo poo aaaaaaaaaaa peeeeee peeeeee poop poop poop."

I don't know if he was taking drugs or what. I find his persona on Twitter to be baffling.

thereitgoes456 · 2 months ago
He was wryly communicating, "your argument was so stupid I don't even need to engage with it".

In my experience he has a horrible response to criticism. He's right on the AI stuff, but he responds to both legitimate and illegitimate feedback without much thoughtfulness, usually with a non-sequitur redirect or an ad hominem.

In his defense though, I expect 97% of the feedback he gets is from Sam Altman glazers, and he must be tired.

karel-3d · 2 months ago
I generally don't agree with him on much; it's just that nobody really talks about how much money those companies burn, and are expected to burn, in the bigger picture.

For me, 10 billion, 100 billion and 1 trillion are all very abstract numbers - until you show how unreal 1 trillion really is.

Andrex · 2 months ago
> "Pee pee poo poo aaaaaaaaaaa peeeeee peeeeee poop poop poop."

Attach your name to this publicly, and you're a clown. I don't know why the world started listening to clowns and taking them seriously, when their personas are crafted to be non-serious on purpose.

Like I said, clowns.

aiiizzz · 2 months ago
The openai bet goes "within 4 years, we need to be able to tell the AI to make money for us." Are they on track?
rcxdude · 2 months ago
There's another important bit there: 'we need to be able to tell the AI to make money for us and no-one else can compete with us on that'. I think both halves of that are questionable.
sixtyj · 2 months ago
Not sure.

https://youtu.be/OYlQyPo-L4g

From the 7th minute: the truth about ChatGPT - only 2% are paying users.

AgentME · 2 months ago
Hasn't he been saying that OpenAI is going to shut down every year for the last few years now? And that models are never going to get better than they were in 2022? I think he's pretty clearly a grifter chasing an audience that wants to be reassured that big tech is going away soon.
jaredcwhite · 2 months ago
Ed Zitron may be many things but he is no grifter. He writes what he believes and believes what he says, and I basically agree with all of it. The chattering class in SV has been wildly wrong for years, and they'll really look foolish when the market crashes horribly and these companies collapse.
moistoreos · 2 months ago
You could literally eliminate homelessness, world hunger, and fully pay health care costs in that same timeframe with much less cash.
exolymph · 2 months ago
Homelessness and world hunger are not bottlenecked by money. There's a stronger case for healthcare, but that's also substantially a political issue.
codyb · 2 months ago
Fully pay health care costs for _who_ for four years with 1 trillion dollars?

Doesn't America alone already spend 2 or 3 trillion a year on healthcare?

wilg · 2 months ago
If that's true, all you have to do is convince other people that it's true and they can just vote for someone to deploy that money. Don't need to wait for someone else to do it.
vbezhenar · 2 months ago
Cash is just paper. You can't eat paper.
mattgreenrocks · 2 months ago
All that money and they’ve been trying to shy away from talk of AGI recently to lower expectations.
impossiblefork · 2 months ago
The AI is getting better at things like maths. I recently asked it about iterating the Burrows-Wheeler transform, and it appeared to really understand that. It's not super easy to reason about why it's reversible, etc. and I felt that it got it.

This is obviously not AGI, and we're very far from AGI as we can see by trying out these LLMs on things like stories or on analyzing text, or dealing with opposing arguments, but for programming and maths performance is at a level where it's actually useful.
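For reference, a minimal sketch of the Burrows-Wheeler transform and its inverse (the naive version, with a unique sentinel character; the sentinel is what makes it reversible - exactly one row of the sorted rotation table ends with it, and that row is the original string):

```python
# Naive Burrows-Wheeler transform and inverse, illustrative only.

def bwt(s: str, sentinel: str = "\x00") -> str:
    s += sentinel  # unique end marker, assumed not to occur in s
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)  # last column of the sorted rotations

def inverse_bwt(last: str, sentinel: str = "\x00") -> str:
    # Rebuild the sorted rotation table column by column:
    # prepend the last column and re-sort, len(last) times.
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    original = next(row for row in table if row.endswith(sentinel))
    return original[:-1]  # drop the sentinel

assert inverse_bwt(bwt("banana")) == "banana"
```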

CuriouslyC · 2 months ago
A lot of this is because there isn't a good definition of AGI. Look at sama's recent interviews; that's how he deflects, along with the remark about the Turing test having ultimately been inconsequential. They have an internal definition of AGI - "the model can perform the vast majority of economically valuable tasks at the level of the best humans" - which isn't the story investors are expecting when they hear AGI, so they're staying mum: that way they can roadmap AGI truthfully without blatantly lying to capital.

Deleted Comment

kllrnohj · 2 months ago
Platforms usually deliver significant value that is hard to replicate. OpenAI doesn't have any such thing. It's trivially replaced, and there are many competitors already. OpenAI is ahead of the curve, but they don't seem to have any particular way to do sticky capture. Migrating to a different LLM is an afternoon's work at most, nothing like the complexity of porting an app between OSes or creating a robust hardware driver model.
aerhardt · 2 months ago
Yeah, I'm extremely loyal to ChatGPT Plus and Codex, but it's because OpenAI has a native Mac app that I like, and Codex is included with Plus and has served me well enough not to look at Claude. I like GPT-5 quite a bit as a user. I'll concede none of these are small things - they've had my money for 2+ years - but they're not gigantic advantages either.

At an enterprise level however, in the current workload I am dealing with, I can't get GPT-5 with high thinking to yield acceptable results; Gemini 2.5 Pro is crushing it in my tests.

Things are changing fast, and OpenAI seems to be the most dynamic player in terms of productization, but I'm still failing to see the moat.

moomoo11 · 2 months ago
I think their moat is ironically the platform.

Like Apple vs every other computer maker.

Their platform mostly just works. Their api playground and docs are like 100x better than whatever garbage Anthropic has.

I think their UX is way better, and I can have long AF conversations without lagging. I can even change models in the same conversation. Basic shit Anthropic can’t figure out (they can fleece their 20x max subscribers tho)

I think if they get the AI human interface right, they will have their iPhone moment and 10x.

airstrike · 2 months ago
OpenAI is dynamic for consumer apps. Anthropic seems much better at productizing AI that you can actually build with, while also catering to enterprise in their own productized offerings

Also Claude Opus 4.1 runs multidimensional circles around GPT-5 in my view. The only better use case for GPT-5 is when you need it to scrape the web for data

int_19h · 2 months ago
Interesting, so it's not just me who's finding Gemini 2.5 Pro to be the quiet leader? Deep Research also seems to be better in it, and the limits for that are far more generous to boot (20 per day on Gemini!).

Makes one wonder if Google will eventually sweep this field.

Insanity · 2 months ago
I don’t think they will get any moat. I might be wrong on this of course, but I don’t see a killer feature for these stochastic parrots that can’t be easily replicated
auggierose · 2 months ago
Actually, Anthropic has a Mac app I can use, and I cannot use the OpenAI one.
pants2 · 2 months ago
Much of the value I get from ChatGPT is based on its memory system, which has a good profile on me. For example, if I ask it to find me something to eat at X restaurant, it will already know my allergies, dietary preferences, weight goals, medications, other foods I like, etc., and suggest an item based on that, all without me explicitly telling it.

Moving from ChatGPT to Claude I would lose a lot of this valuable history.

cool_dude85 · 2 months ago
Or you could type up "My allergies are X, Y, Z, I have the following preferences..." etc. and put it into whatever chat bot you like. Obviously this is a bit of a pain, but it probably constrains significantly how much of a premium ChatGPT can charge. You might not bother if ChatGPT is time and a half more expensive, but what if it's 3x as much as the competition? What if there's a free alternative that's just as good?
mattmanser · 2 months ago
Still not much of a moat, especially with data portability.

In the EU/UK you might not have rights to the memories right now, but you've rights to the inputs that created those memories in the first place.

Wouldn't be too hard to export your chat history into a different AI automatically.

scottyah · 2 months ago
Have you tried asking ChatGPT to output everything it knows about you to a format easily digestible by LLMs? Does the memory stick between model switches?
Fuzzwah · 2 months ago
I like to ask each llm what it knows about me. I feel like I could take that output and feed it into another llm and the new one would be up to speed quickly....
cmrdporcupine · 2 months ago
You could literally ask it to write out everything it knows about you into a form usable in a CLAUDE.md file, put that in the directory you're using e.g. claude code in, and boom.

No moat.

Deleted Comment

threetonesun · 2 months ago
The play OpenAI is making has nothing to do with the underlying models any more. They release good ones but it doesn't really matter, they're going for being the place people land when they open a web browser. That is incredibly sticky and not easily replaced. Being the company that replaces the phrase "oh just Google it" is worth half the money in the world, and I think they're well on their way to getting there.
kllrnohj · 2 months ago
But that also makes them the product itself and requires standalone profitability, which is not at all like the Windows analogy being presented.
scottyah · 2 months ago
I do say ChatGPT when referring to LLMs/genAI in general, but I do hate saying it as it is nowhere near as nice to say as "google". I will switch immediately once something better comes up.
Mistletoe · 2 months ago
I actually prefer Google Gemini. 2.5 is free and works awesome for what I need AI for. It just made the resume I uploaded immeasurably better last night.

https://www.reddit.com/r/Bard/comments/1mkj4zi/chatgpt_pro_g...

yubblegum · 2 months ago
> Migrating to a different LLM is an afternoon's work at most, not nearly the complexity of porting an app between OS' or creating a robust hardware driver model.

I question this. Each vendor's offering has its own peculiar prompt quirks, does it not? Theoretically, switching RDBMS vendors (say Oracle to Ingres) was also "an afternoon's work", but it never happened. The minutiae are sticky with these sorts of almost-but-not 'universal' interfaces.

mgh95 · 2 months ago
> I question this. Each vendor's offering has its own peculiar prompt quirks, does it not? Theoretically, switching RDBMS vendors (say oracle to postgres) was also "an afternoon's work" but it never happened. The minutia is sticky with these sort of almost-but-not 'universal' interfaces.

The bigger problem is that there was never a way to move data between oracle->postgres in pure data form (i.e. point pgsql at your oracle folder and it "just works"). Migration is always a pain, and thus there is a substantial degree of stickiness, due to the cost of moving databases both in terms of risk and effort.

In contrast, vendors [1] are literally offering third-party LLMs (such as Claude) in addition to their own, with one-click switching. This means users can try another model and, if they like it, switch with little friction.

[1] https://blog.jetbrains.com/ai/2025/09/introducing-claude-age...

kllrnohj · 2 months ago
> The minutia is sticky with these sort of almost-but-not 'universal' interfaces.

True, but that's not really applicable here since LLMs themselves are not stable, and are certainly not stable within a vendor's own product line. Like imagine if every time Oracle shipped a new version it was significantly behaviorally inconsistent with the previous one. Upgrading within a vendor and switching vendors end up being the same task. So you quickly solidify on either

1) never upgrading, although with these being cloud services that's not necessarily feasible, and since LLMs are far from a local maximum in quality, that'd quickly leave your stack obsolete

or

2) being forced to be robust, which makes it easy to migrate to other vendors

impossiblefork · 2 months ago
It's reasonable to question it, but there's a fun Chinese paper (https://arxiv.org/abs/2507.15855) where they attempt to create a system of prompts that can be used with commercial LLMs to solve hard maths problems.

It turns out that they can use the same prompt system for all of them, with no changes and still solve 5/6 IMO problems. I think this is possibly iffy, since people might have updated the models etc., but it's pretty obvious that this kind of thing is how OpenAI are doing their multi-stage thinking thing for maths internally.

Consequently if prompt systems are this transferable for these hard problems, why wouldn't both they and individual prompts, be highly transferable in general?

psychoslave · 2 months ago
No, changing RDBMS is a totally different challenge. They provide predictable, reproducible output that is very sensitive and brittle to the slightest change in the input, while with an LLM you expect the exact opposite.

Changing the LLM backend in some IDE is as complicated as selecting an option in a dropdown, for those who integrate such a feature. There are other scenarios where it might be a bit more complicated to transition, of course, but that's it.

iamflimflam1 · 2 months ago
If you are doing things "properly" then you have good evals that let you test the behaviour of different LLMs and see if they work for your problem.

The vendors have all standardised on OpenAI's API surface - you can use OpenAI's SDK with a number of providers - so switching is very easy. There are also quite a few services that offer exactly this.

The real test is whether a different LLM actually works for your problem - hence the need for evals to check.
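As a concrete illustration, here's a minimal sketch of that kind of switch using the OpenAI Python SDK against an OpenAI-compatible endpoint. The second provider's base URL and model name are placeholders - actual endpoints, model IDs, and keys depend on the vendor.

```python
# Minimal sketch: swapping providers behind an OpenAI-compatible API.
# The "other" entry is a placeholder; real base URLs and model names vary by vendor.
from openai import OpenAI

PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "other":  {"base_url": "https://example-provider.invalid/v1", "model": "some-model"},
}

def ask(provider: str, prompt: str, api_key: str) -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=api_key)  # one key per provider
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Switching vendors is just a different PROVIDERS key (plus that vendor's API key);
# the evals are what tell you whether the new model is actually good enough.
# print(ask("openai", "Say hello.", api_key="sk-..."))
```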

jgalt212 · 2 months ago
If you took any of current top 3 models from me, I would not miss the deleted one in the least. I run almost every non-trivial prompt through multiple models anyway.
BeetleB · 2 months ago
> Each vendor's offering has its own peculiar prompt quirks, does it not?

That applies even when you switch models within a vendor, though.

yen223 · 2 months ago
A lot of the points also apply to Google as a search engine, yet we're still seeing Google owning about 90% of search engine usage.
BeetleB · 2 months ago
Google search requires a lot of resources to crawl, keep indices up to date, and provide user-level relevance (i.e. different results for different users). Then couple it with their other services (Google Maps, etc).

The competitors have not come even close to Google's level of quality.

With LLMs, it's different. Gemini/Claude are as good, for the most part. And users don't care that much either - most use the standard free ChatGPT, which likely is worse than many competitors' paid models.

Andrex · 2 months ago
Google was always terrified by the fact that they had no moat in Search. This was clear from interviews and articles at the time. That's why they decided to roll up the ad market instead, and once they had the advertisers Search became a self-fulfilling monopoly.

Google Search would be moatless if not for the AdMob purchase.

kllrnohj · 2 months ago
Google is a product not a platform. How many companies are still using Google search for their intranets? (yes yes I'm old, don't remind me)

Meanwhile bing search is actually a platform, and is what then powers other "search engines" (duckduckgo, kagi, etc...)

jorblumesea · 2 months ago
They understand that, and that's why they're making it sticky by adding in-app purchasing, advertising, and integrations. It's also why they hired OGs from IG/FB. They are building the moat and hoping that being first to market is going to work out.
dantyti · 2 months ago
I do not believe that advertising and purchasing are at the top of the list of things that make software sticky

Deleted Comment

wahnfrieden · 2 months ago
That's why they're working on consumer hardware
an0malous · 2 months ago
And business partnerships, government partnerships, and AI regulation (to establish laws that keep competitors out). Sam knows they have no moat and will try every avenue to establish one.

As Peter Thiel says: “competition is for losers”

djmips · 2 months ago
Was Windows hard to replicate? Not really.
DebtDeflation · 2 months ago
That's why OpenAI is hard pivoting towards products now. The big one IMO is Instant Checkout and the Agentic Commerce Protocol. ChatGPT is going to turn into a product recommendation engine and OpenAI is going to get a slice of every purchase, which is going to disrupt the current impression/click adtech model and potentially Google and Amazon themselves. It's an open question how far they can push this without enshittifying ChatGPT itself, but we'll see.
int_19h · 2 months ago
The notion that people would be willing to let LLMs spend money is, frankly, insane given the hallucination problems that still don't have any clear solution in sight.
ryukoposting · 2 months ago
OpenAI's moat is the data you give it. It's the same reason so many people have GMail accounts, even though we all know Google sucks. It's not that you like GMail better than any other email service, it's because migrating to another service is a pain in the ass.

OpenAI will either use customer data to enshittify to a level never seen before, or they will go insolvent.

llmslave · 2 months ago
They only invented modern AI, can easily be replaced!!
kllrnohj · 2 months ago
They did nothing of the sort. Google invented modern AI: https://en.wikipedia.org/wiki/Attention_Is_All_You_Need

OpenAI turned that research into a product before Google, which is a huge failure on Google's part, but that's orthogonal to the invention of what powers modern models.

barrenko · 2 months ago
You could say the same thing about Facebook...
chmod775 · 2 months ago
You mean the social network that is currently dying the same death as countless other platforms before it, just on a larger scale?

Maybe some are too young to remember the great migrations from/to MySpace, MSN, ICQ, AIM, Skype, local alternatives like StudiVZ, ..., where people used to keep in contact with friends. Facebook was just the latest and largest platform where people kept in touch and expressed themselves in some way. People adding each other on Facebook before others to keep in touch hasn't been a thing for 5 years. It's Instagram, Discord, and WhatsApp nowadays depending on your social circle (two of which Meta wisely bought because they saw the writing on the wall).

If I open Facebook nowadays, then out of ~130 people I used to keep in touch with through that platform, pretty much nobody is still doing anything on there. The only sign of life will be some people showing as online because they have the Facebook app installed to use direct messaging.

No, people easily migrate between these platforms. All it takes is putting your new handles (Discord ID/phone number/etc.) in a sticky so people know where to find you. And especially children will always use whichever platform their parents don't.

Small caveat: This is a German perspective. I don't doubt there's some countries where Facebook is still doing well.

yc349833874923 · 2 months ago
People say you're wrong but I agree. Facebook is nothing more than a rent-seeking middle man between you and your friends/family. Instead of just talking to your family normally, like over the phone, now you have to talk to them in between mountains of Sponsored Content and AI-generated propaganda. It provides no productive value to the world except for making the world more annoying and making people more isolated from one another.

When you realize this, you realize that a lot of other supposedly valuable tech companies operate in the exact same way. Worrying that our parents' retirement depends heavily on their valuations!

warkdarrior · 2 months ago
FB has network effects (your friends/neighbors/etc are there). When I use ChatGPT, I don't care whether anyone else uses it as well.
wahnfrieden · 2 months ago
Not at all, because it is not easy to replicate and move its social graph
pixl97 · 2 months ago
I mean, technically no, FB has a network effect of the other people on it either being the people you want to talk to, or the people you want to advertise to.
mallowdram · 2 months ago
That's a precise, incisive observation: OpenAI is trivially replaceable (any AI provider is), as the evidence demonstrates. It has no claim to operating software that's specifically distinct from others.
Multiplayer · 2 months ago
The current AI wave has been compared (by sama) to electricity and sometimes to transistors. AI is just going to be in all the products. The trillion dollar question is: do you care what kind of electricity you are using? So, will you care what kind of AI you are using?

In the last few interviews with him I have listened to he has said that what he wants is "your ai" that knows you, everywhere that you are. So his game is "Switching Costs" based on your own data. So he's making a device, etc etc.

Switching costs are a terrific moat in many circumstances and require a 10x product (or whatever) to get you to cross over. Claude Code was easily a 5x product for me, but I do think GPT5 is doing a better job on just "remembering personal details" and it's compelling.

I do not think that apps inside ChatGPT matter to me at all, and I think they will go the way of all the other "super app" ambitions OpenAI has.

Fraterkes · 2 months ago
If you take that at face value, shouldn't every investor just back Google or Apple instead? Like, OpenAI is, at best, months ahead when it comes to model quality. But for them to get integrated into the lives of people in the way all their competitors are would take years. If the way in which ai becomes this ubiquitous trillion dollar thing involves making it hyper-personalized, is there any way in which OpenAi is particularly well positioned to achieve that?
int_19h · 2 months ago
They haven't been ahead in model quality for some time now.
visarga · 2 months ago
> I do think GPT5 is doing a better job on just "remembering personal details" and it's compelling.

Today I asked GPT5 to extract a transcript of all my messages in the conversation and it hallucinated messages from a previous conversation, maybe leaked through the memory system. It cannot tell the difference. Indiscriminate learning and use of the memory system is a risk.

anthonypasq · 2 months ago
I mean, don't you think this is more analogous to the introduction of computing than of electricity? If you told people in 1960 that there would be supercomputers inside people's refrigerators, do you think they would have believed you?

And most people actually don't care what CPU they have in their laptop (enthusiasts still do, which I think continues to match the analogy); they care more about the OS (ChatGPT app vs Gemini, etc.).

random9749832 · 2 months ago
>The current AI wave has been compared (by sama) to electricity and sometimes transistors. AI is just going to be in all the products.

Sorry, but you have to be beyond thick to believe any of this.

polalavik · 2 months ago
Ya, absolutely wild take.

Can the world and tech survive fruitfully without AI? Yes. Can the world and tech survive without electricity and transistors? Not really. The modern world would come crashing down if transistors and electricity disappeared overnight; if AI disappeared overnight, the world might just be a better place.

yalogin · 2 months ago
I am not sure about this. They definitely created a brand new service and data flows that didn't exist before, and they have the majority of the mind share; however, it's already commoditized. The next two to three years will show how the chips fall. I can see that it's tough or almost impossible for Apple to get a share in this, but Google is right there to take the consumer side. For enterprise, again, we have to wait and see how GCP and AWS do.

The value is not in the LLM but in vertical integration and providing value. OpenAI has identified this and is pursuing vertical integration in a hurry. If the revenue sustains, it will be because of that. For the consumer space, again, Nvidia is better positioned with their chips and SoCs, but OpenAI is not a sure thing yet. By that I don't mean they are going to fall apart - they will continue to make a large amount of money - but whether it's their world or not is still up in the air.

ryukoposting · 2 months ago
> The value is not in the llm but vertical integration and providing value.

The irony being that LLMs are particularly good at writing the web frontend code, lowering the technical barrier to entry for competitors.

sergiotapia · 2 months ago
OpenAI doesn't have a moat unfortunately. One URL replacement away and you can switch most models in minutes. I have personally done this many times over the last year and a half.

All it takes is other labs producing better and better models, and the race to the bottom on token costs.

sumedh · 2 months ago
> OpenAI doesn't have a moat unfortunately.

The moat is the branding; for most people, AI means ChatGPT.

alonmower · 2 months ago
You could say the same thing about a search engine. If they keep an edge on brand or quality, they can find ways of monetizing that attention.
nemomarx · 2 months ago
Honestly how google maintains a "moat" over other search engines could use a good business study. They've defied some pretty serious competitors there without an obvious lock in or anything.

(You can say default in various browsers and a phone OS and that's probably the main component but it's not clear changing that default would let Bing win or etc.)

thiago_fm · 2 months ago
The victory lap from Sam Altman and all the money being raised makes people forget the following:

- Open source LLM models are at most 12 months behind ChatGPT/Gemini;
- Gemini from Google is just as good and also much cheaper - for both Google and its users - since they make their own TPUs;
- Coding: OpenAI has nothing like Sonnet 4.5.

They look like they invested billions to do research for competitors, which have already taken most of their lunch.

Now with the Sora 2 app, they are just burning more and more cash so people watch those generated videos on TikTok and YouTube.

I find all the big talk hilarious. I hope I get proven wrong, but they seem to be getting wrecked by competitors.

charles_f · 2 months ago
I don't understand the link between the title of the article, and its content. If I summarize their three points:

1. OpenAI's corporate strategy is to become a monopoly.
2. OpenAI is investing in infrastructure because they think they'll have lots of users in the future.
3. Making videos on Sora is fun, and people are gonna post more of these.

How does that substantiate "we live in OpenAI's world"? Am I missing something?

irl_zebra · 2 months ago
I'm on the verge of unsubscribing from Stratechery. The last month has been a bunch of fawning over Meta, YouTube, and constant talk about and fawning over OpenAI and whatever latest models are coming out. It's kind of tiring and boring. I swear I heard them talk about some YouTube influencers event like five times across their different shows and across time. Like, I do not care at all.
mooglevich · 2 months ago
As a longtime loyal subscriber to Stratechery... I kinda agree. But as the other commenters did point out, this does reflect how the market seems to feel about OpenAI, at least. (Meta - I'm less sure of; Thompson does fawn over Meta quite a bit, I personally think it's too much and seems to not fully reflect reality, but man do they really cane it when you see their usage numbers, so maybe he's right.)

I did think his GPT-5 commentary was good, insofar as picking up the nuance of why it's actually better than the immediate reactions I, at least, saw in the headlines.

Where I do agree with you is how Stratechery's getting a little oversaturated. I'm happy Ben Thompson is building a mini media empire, but I might have liked it more when it was just a simple newsletter that I got in my inbox, rather than pods, YouTube videos, and expanding to include other tech/news doyens. Maybe I'm just a tech media hipster lol.

roxolotl · 2 months ago
Is this fawning or just reflecting reality? I’m generally in the “LLMs kinda suck camp” and I read the headline and thought “yep 100%”. OpenAI is able to raise and deploy insane amounts of capital on a whim. Regardless of that being a good or bad thing it’s still true.
Multiplayer · 2 months ago
Did you listen to the recent interview with Ben Bajarin? I thought that interview alone justified the subscription. Curious as to whether anyone else felt the same.
graeme · 2 months ago
Fantastic interview. Hard to get much info from inside the world Bajarin was speaking of. Notable how everyone is saying they can't get capacity for the tokens they're trying to serve.