My point is why are your economic motivations valid while his aren’t?
How could a practical LLM enthusiast make a non-economic argument in favor of their use? They're opaque, usually secretive, jumbles of linear algebra. How could you make a reasonable non-economic argument about something you don't, and perhaps can't, reason about?
> War in Ukraine will tank the stock market
> High interest rates will tank the stock market
> Tariffs will tank the stock market
> AI will tank the stock market <- We are here
All those statements made sense to me at the time. And I have no doubt that one of these days, someone will make a correct prediction. But who the hell knows what and when.
Diversify, be reasonable and be prepared for it to happen someday. But freaking out with any new prediction of doom is not the winning strategy.
All of the events you listed have had significant economic effects and required massive intervention from the state to buoy asset prices. The longer this continues the more our economy becomes geared to producing "value" for this small, and shrinking, group of owners at the expense of everyone else.
[1] https://techcrunch.com/2025/06/13/scale-ai-confirms-signific...
Their Wikipedia history section lists accomplishments that align closely with DoD's vision for GenAI. The current admin, and the western political elite generally, are anxious about GenAI developments and social unrest, the pairing of Meta and Scale addresses their anxieties directly.
Frankly the noise being made online about AI boils down to social posturing in nearly all cases. Even the author is striking a pose of a nuanced intellectual, but this pose, like the ones he opposes, will have no impact on events.
Unfortunately, it's just the opposite. It seems most people have fully assimilated the idea that information itself must be entirely subsumed into an oppressive, proprietary, commercial apparatus. That Disney Corp can prevent you from viewing some collection of pixels, because THEY own it, and they know better than you do about the culture and communication that you are and are not allowed to experience.
It's just baffling. If they could, Disney would scan your brain to charge you a nickel every time you thought of Mickey Mouse.
Training a model is not equivalent to training a human. Freedom of information for a mountain of graphics cards in a privately owned data center is not the same as freedom of information for flesh and blood human beings.
You could simplify Rust slightly by sacrificing performance. For example, you could box everything by default (like Java) and get rid of `Box` as a concept. You could even make everything a reference-counted pointer (but only allow mutation when the compiler can guarantee that the reference count is 1). You could ditch the concept of unsized types. Things like that. Rust doesn't strive to be the simplest language it could be; it prefers performance. None of this is really what people complain about with the language, though.
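The "mutate only when the count is 1" idea already exists in today's Rust as a runtime check rather than a compile-time guarantee: `Rc::get_mut` hands out a mutable reference only while there is a single owner. A minimal sketch (the `exclusive_push` helper is my own illustration, not anything from the comment):

```rust
use std::rc::Rc;

// Mutation is permitted only while the reference count is exactly 1,
// which Rc::get_mut verifies at runtime.
fn exclusive_push(data: &mut Rc<Vec<i32>>, value: i32) -> bool {
    match Rc::get_mut(data) {
        Some(v) => {
            v.push(value);
            true
        }
        None => false, // currently shared: mutation refused
    }
}

fn main() {
    let mut data = Rc::new(vec![1, 2]);
    assert!(exclusive_push(&mut data, 3)); // sole owner: allowed

    let shared = Rc::clone(&data);
    assert!(!exclusive_push(&mut data, 4)); // two owners: refused
    drop(shared);

    assert!(exclusive_push(&mut data, 4)); // sole owner again: allowed
    assert_eq!(*data, vec![1, 2, 3, 4]);
}
```

A hypothetical everything-is-`Rc` Rust would presumably move this check into the compiler, which is exactly the performance-for-simplicity trade the comment describes.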
Personally, I find immutable data much easier to grok (not just to understand while concentrating on it) than ownership rules.
Of course this will lead to conflict between Altman and Musk as they rush to entrench themselves within the current administration. This buyout offer could be an effective tactic to delay the pending funding from SoftBank, and in turn the kickoff of Stargate, while DOGE gets up to speed. Even a short delay could be impactful in the early days of an aggressive and fickle administration.
> what makes Erlang runtimes so special that you don't get from common solutions for retries etc.?
The Erlang runtime can start a scheduler for every core on a machine and, since processes are independent, concurrency can be achieved by spawning additional processes. Processes communicate by passing messages which are copied from the sender into the mailbox of the receiver.
As an application programmer, all of your code will run within a process and passively benefit from these properties. The tradeoff is that concurrency is on by default and single-threaded performance can suffer. There are escape hatches to run native code, but it is more painful than writing concurrent code in a single-threaded-by-default language. The fundamental assumption of Erlang is that you are much more likely to need concurrency and fault tolerance than maximum single-thread performance.
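The spawn-and-message-pass pattern described above can be sketched outside of Erlang too. A rough analogue in Rust, using OS threads as the "processes" and a channel as the receiver's mailbox (this only mimics the shape of the model; it has none of BEAM's preemptive scheduling or fault isolation):

```rust
use std::sync::mpsc;
use std::thread;

// Spawn n independent workers that communicate only by sending
// messages into the receiver's mailbox (a channel).
fn run_workers(n: usize) -> Vec<String> {
    let (tx, rx) = mpsc::channel();
    for id in 0..n {
        let tx = tx.clone();
        thread::spawn(move || {
            // In Erlang the message would be copied into the mailbox;
            // here ownership of the String is moved instead.
            tx.send(format!("worker {id} done")).unwrap();
        });
    }
    drop(tx); // close the sending side so the mailbox drains

    let mut msgs: Vec<String> = rx.iter().collect();
    msgs.sort(); // arrival order is nondeterministic
    msgs
}

fn main() {
    for msg in run_workers(3) {
        println!("{msg}");
    }
}
```

The difference is that Erlang makes this the default for all code, with millions of cheap processes and per-process crash isolation, rather than an opt-in pattern over heavyweight OS threads.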