Most of the high-volume enterprise use cases run through their cloud providers (e.g., Azure).
What we have here is mostly from smaller players. Good data, but obviously a subset of the inference universe.
With better imaging, tooling, and archaeological funding, I'm sure we'll find much more evidence like this.
So many countries' Bronze Age and earlier ancient periods are underexplored.
- Extremely personal data on users
- A novel channel for introducing sponsored products and learning how users respond to them
- Strong branding for non-techie people (most normal people don't know what Claude or Gemini are)
- An app that is getting more and more addictive/indispensable
I think OpenAI is going to kill it in ads eventually. This is why Meta and Google went all in on AI: their lucrative digital ad business is under existential threat.
I think people who kept saying there is no moat in AI are about to be shocked at how strong a moat ChatGPT actually has.
All free LLM chat apps will need to support ads, or they will eventually either die from worse unit economics or run out of funding.
P.S. Sam just said OpenAI's revenue will finish at $20b this year, 6x growth from 2024 (which implies a base of roughly $3.3b last year). Zero revenue from non-sub users. What do you guys think their revenue will end up at in 2026?
Getting $200 subscriptions from a small number of whales, $20 subscriptions from the average white-collar worker, and then supporting everyone else through advertising seems like a solid revenue strategy.
Anthropomorphism of LLMs is obviously flawed but remains the best way to actually build good agents.
I do think this is one thing that will hold enterprise adoption back: can you really trust systems like these in production when the best control you can offer is pleading with the model not to do something?
Of course good engineering will build deterministic verification and scaffolding in to prevent issues, but it is a fundamental limitation of LLMs.
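A minimal sketch of what that deterministic scaffolding can look like, assuming a hypothetical `ToolCall` shape, allowlist, and path rule (none of these names come from a real framework): the model's proposed action is checked in ordinary code before anything runs, so safety doesn't depend on how the prompt was worded.

```python
# Sketch: gate an agent's proposed tool call with deterministic checks,
# instead of relying on prompt-level pleading. All names here are
# hypothetical illustrations, not a real agent framework's API.
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    args: dict

# Hard allowlist: anything the model proposes outside this set is
# rejected, regardless of what the prompt said.
ALLOWED_TOOLS = {
    "search_docs": {"query"},
    "read_file": {"path"},
}

def validate(call: ToolCall) -> None:
    """Raise before execution if the proposed call violates policy."""
    if call.name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {call.name!r} is not allowlisted")
    extra = set(call.args) - ALLOWED_TOOLS[call.name]
    if extra:
        raise ValueError(f"unexpected arguments: {extra}")
    # Example deterministic rule: reads are confined to one directory.
    if call.name == "read_file" and not call.args["path"].startswith("/srv/docs/"):
        raise PermissionError("path outside the permitted directory")

# Usage: parse the model's output into a ToolCall, then gate it.
call = ToolCall(name="read_file", args={"path": "/etc/passwd"})
try:
    validate(call)  # raises: path outside /srv/docs/
except (PermissionError, ValueError) as err:
    print(f"blocked: {err}")
```

The point is that the check is boring, auditable code: it fails closed and doesn't care how persuasively the model (or a prompt injection) argued for the call.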
The more prevalent automation is, the worse humans do when that automation is taken away. This will now be true for learning as well.
Ultimately the education system is stuck in a bind. Companies want AI-native workers, students want to work with AI, and parents want their kids to be employable. Even if the system wants to ensure that students are taught how to learn, not just a specific curriculum, its stakeholders have to be on board.
I think we're shifting to a world where not only will elite status markers, like having worked at McKinsey or Google, become more valuable, but interview processes will also get significantly longer, because companies will run their own assessments rather than trust credentials from an education system suffering from grade inflation and automation.