State of AI in Business 2025 [pdf] - https://news.ycombinator.com/item?id=44941374 - August 2025
https://web.archive.org/web/20250818145714/https://nanda.med...
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.
https://venturebeat.com/ai/why-do-87-of-data-science-project...
- I strongly recommend Chip Huyen's books ("Designing Machine Learning Systems" and "AI Engineering") and blog (https://huyenchip.com/blog/).
- Andreessen Horowitz's "AI Canon" is a good reference listicle (https://a16z.com/ai-canon/)
- "12 factor agents" (https://github.com/humanlayer/12-factor-agents)
Curious if you have any recommendations there.
I trained a bunch of simple linear regressions - while Omniscience Accuracy gave the best fit (R²: 0.98), it predicted absurd multi-trillion parameter counts (Gemini 3 Pro ~1,254T total parameters). Artificial Analysis' Intelligence Index produced more plausible results:
- Gemini 3 Pro: 3.4T
- Claude 4.5 Sonnet: 1.4T
- Claude 4.5 Opus: 4.1T
- GPT-5.x series: 2.9-5.3T
(all total parameters)
Interesting notes:
- task benchmarks (Tau²/GDPVal) aren't predictive of model size
- adding price made the fit worse
- sparsity or parameter-activation ratios did not influence predicted sizes at all
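For anyone curious what "simple linear regression to predict parameter counts" looks like in practice, here's a minimal sketch. The scores and parameter counts below are purely illustrative placeholders, not the data used above; the approach is to fit log(params) against a benchmark score for models with disclosed sizes, then extrapolate to closed models:

```python
import numpy as np

# Illustrative (benchmark score, disclosed total params in billions) pairs.
# These numbers are made up for the sketch; the real fit used actual
# benchmark scores for open-weight models with known sizes.
scores = np.array([35.0, 42.0, 51.0, 58.0, 66.0])
params_b = np.array([8.0, 70.0, 123.0, 405.0, 671.0])

# Parameter counts grow roughly exponentially with benchmark score,
# so fit the log of the size: log(params) ~ slope * score + intercept.
slope, intercept = np.polyfit(scores, np.log(params_b), 1)

# Goodness of fit (R^2) in log space.
pred = slope * scores + intercept
ss_res = np.sum((np.log(params_b) - pred) ** 2)
ss_tot = np.sum((np.log(params_b) - np.log(params_b).mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Extrapolate to a hypothetical frontier-model score.
frontier_score = 73.0
predicted_b = np.exp(slope * frontier_score + intercept)
print(f"R^2 = {r2:.3f}, predicted size ~ {predicted_b:.0f}B params")
```

The caveat is visible in the code itself: the extrapolation is exponential in the score, so a benchmark whose top-end scores saturate (or don't) can swing the predicted size by an order of magnitude, which is likely why one index produced ~1,254T and another ~3.4T for the same model.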