Reality is hilarious.
Unfortunately the AI bubble seems to be predicated on just improving LLMs and really, really hoping that they'll magically turn into even weakly general AIs (or even AGIs, as the worst Kool-Aid drinkers claim they will). So everybody is throwing absolutely bonkers amounts of money at incremental improvements to existing architectures, instead of doing the hard thing and trying to come up with better ones.
I doubt static networks like LLMs (or practically all other neural networks currently in use) will ever be candidates for general AI. All they can do is react to external input; they don't have any sort of an "inner life" outside of that, i.e. the network isn't active except when you throw input at it. They can't even learn on the fly, and (re)training them takes ridiculous amounts of money and compute.
I'd wager that for producing an actual AGI, spiking neural networks or something similar would be what you'd want to lean into, maybe with some kind of neuroplasticity-like mechanism. Spiking networks already exist and they can do some pretty cool stuff, but nowhere near what LLMs can do right now (even if LLMs do it kinda badly). Currently they're harder to train than traditional static NNs because spikes aren't differentiable, so you can't apply backpropagation directly, and they're still relatively new, so there are a lot of open questions about e.g. the uses and benefits of different neuron models and such.
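To make the differentiability point concrete, here's a minimal sketch of a leaky integrate-and-fire (LIF) neuron, a common spiking-network building block. All parameter names and values here are illustrative, not from any particular library:

```python
# Minimal leaky integrate-and-fire (LIF) neuron in discrete time.
# Parameters (leak, threshold, etc.) are made-up illustrative values.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 leak=0.9):
    """Simulate one LIF neuron over discrete time steps.

    The membrane potential leaks toward rest, integrates input, and
    emits a spike (1) whenever it crosses the threshold, then resets.
    The threshold crossing is a step discontinuity, which is why plain
    backpropagation doesn't apply directly to spiking networks.
    """
    v = v_rest
    spikes = []
    for i_t in input_current:
        v = leak * (v - v_rest) + v_rest + i_t  # leaky integration
        if v >= v_thresh:
            spikes.append(1)  # non-differentiable spike event
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A constant input drives periodic spiking.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The output is a binary spike train rather than a smooth activation, which is exactly the property that forces workarounds like surrogate gradients when training these networks.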
For the case of Reddit, a silo maps nicely onto a subreddit. Within any subreddit the moderator can have full control, they can moderate it exactly as they choose. If you don't like it, create your own where you will have free rein.
Well, not that they don't do stupid things all the time, but having credits live on a system with a weak consistency model would be silly.
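The failure mode with weakly consistent credits is lost updates. A toy sketch, using a made-up last-writer-wins merge and invented balances, shows a spend silently disappearing:

```python
# Toy illustration of why account credits shouldn't live on a weakly
# consistent store: with last-writer-wins (LWW) conflict resolution,
# one of two concurrent deductions is silently lost. All names and
# numbers are hypothetical.

def lww_merge(a, b):
    """Merge two replica states (timestamp, credits) by keeping the later write."""
    return a if a[0] >= b[0] else b

start = (0, 100)  # (timestamp, credits)

# Two clients spend concurrently, each against the same starting balance.
replica_a = (1, start[1] - 30)  # client A spends 30 -> (1, 70)
replica_b = (2, start[1] - 50)  # client B spends 50 -> (2, 50)

merged = lww_merge(replica_a, replica_b)
print(merged[1])  # → 50: the 30-credit spend was lost in the merge
```

80 credits were spent, but the merged balance only reflects 50 of them, i.e. the system handed out 30 credits for free.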
Equities are forward looking. TSMC's valuation doesn't make sense if it doesn't have a backlog to grow into.
However, that was never very many people, only the smart ones. Many would rather have shouted into the void at reddit/stackoverflow/quora/yahoo answers/forums/irc/whatever, seeking an "easy" answer that is probably not entirely correct, than bother going straight to the source of truth.
There's a ton of money in controlling that pipeline and selling expensive monthly subscriptions to use it. Even better if you can shoehorn yourself into the workplace and get employers to pay a premium per user. Get people to rely on it until they have no clue how to deal with anything without it.
It doesn't matter if it's any good. That isn't even the point. It just has to be the first thing people reach for and therefore available to every consumer and worker, a mandatory subscription most people now feel obliged to pay for.
This is why these companies are worth billions. Not for the utility, but for the money to be made off the people who don't know any better.
Apropos of that, I wonder if OpenAI et al are losing money on API plans too, or if it's just the subscriptions.
Source for the OpenAI loss figure: https://www.theregister.com/2025/10/29/microsoft_earnings_q1...
Source for OpenAI losing money on their $200/mo sub: https://fortune.com/2025/01/07/sam-altman-openai-chatgpt-pro...