Whether or not he's right, Zitron just keeps repeating the same points over and over again at greater and greater length. This newsletter is 18,500 words long (with no sections or organization), and none of it is new.
As someone who generally agrees with the thesis, I still find the length of the article quite frustrating; it's definitely quantity over quality here.
The core issue is that OpenAI is committing to spending hundreds of billions on AI data center expansion that it doesn't have and that it doesn't appear able to acquire, and this basic fact is being obscured by circular money flows and the finances of AI being extremely murky [1]. But Zitron is muddying this message by excessive details in trying to provide receipts, and burying all of it behind what seems to be a more general "AI doesn't work" argument that he seems to want to make but isn't sufficiently well-equipped to make.
[1] The fact that the Oracle and Nvidia deals with OpenAI may actually be the same thing is the one thing new to me in this article.
It's all so regurgitated and unoriginal, even compared with other anti-AI critics, that it truly feels like he's just repeating himself. Not that the AI hypers are any better, but the people with more nuanced, middle-of-the-road views are the ones worth reading (such as Simon Willison's work on the lethal trifecta, the recent "AI Coding Trap" article, etc.), and I think that's interesting. I also feel like he cherry-picks his statistics (on both model performance and economics) as much as his enemies do, so it can be exhausting to read.
Every time one of Zitron's posts comes up I think of bitcoin or algorithmic social media feeds. Like those things, I understand people have strong opinions on whether they're good or bad for society.
But what's the endgame? Is it to persuade people not to use these things? Make them illegal? Create some other technology that makes them obsolete or non-functional?
Ed is insufferable. And for the most part, he is right. LLMs are propping up the economy, but as a technology these models are iterative, not transformative. At the current rate of investment, unless we reach AGI in the next 24 months, the ROI will not pay off. I don't know what I'm supposed to do if Ed is right. Maybe I need to move my retirement accounts out of index funds and into cash. But for now, it does seem the market is in a bit of collective psychosis. Sigh.