To me it’s given:
- AI in its current state is ruthless in achieving its goal
- Providers tune that ruthlessness up to build stronger AIs than the competition
- Humans can’t evaluate all consequences of the seeds they’ve planted.
Collateral and reckless damage is guaranteed at this point.
Combined with now giving some AIs the ability to kill humans, this is gonna be interesting...
We could stop it, but we won't.
It's industrialization and mechanized warfare all over again
I was fully expecting to find this wind-up aimed at those peddling the "AI is hype" laziness.
It's laziness because they have little grounding in CS fundamentals to base such claims on; the deductions can be made, just not clearly by people who would need to study a lot more first.
It's like watching an invisible train (visible to those with strong CS) rolling down the tracks at a leisurely pace. Those sitting in their stalled car on the tracks are busy tweeting about "AI HPY PE TRAIN." Until it wrecks their car, the gimmick is free oxygen. It's a lot easier to write articles than it is to build GPUs and write programs.
So, what CS fundamentals do you need to evaluate whether AI is the real thing or will disappoint in the future? Until a few months ago, coding agents were met with skepticism; then Anthropic introduced their new model and, with it, a hype train that cannot be rationally justified.

Look, SOTA LLMs, and coding agents in particular, are impressive. However, current predictions about the future of software development (and the world in general) are speculative. There is little to no data showing whether AI can deliver on its promises. How could there be in this short time frame?

No one knows what the future will hold, how coding agents will be integrated into our work and everyday lives in the long run, or what hard limitations they will reveal. No one can tell you how professions will change in the coming years; every prediction is purely speculative, and anyone making prophecies is either trying to cope with the uncertainty themselves or has a stake in the AI bet. It would be nice if people were actually humble enough to admit that they have no idea what will happen in the future, instead of writing the hundredth doom-and-gloom post.
Yes, and that's a problem. If the advent of coding agents leads to people who are only in it for the money staying away from higher education, good. Those people are the reason higher education turned to shit anyway, and maybe it will be a nice change when people go into higher ed out of curiosity rather than because they smell money.
Would you ever be tempted to make such a claim (that everyone is close to the same in ability and effort is the main determiner of success) about athletes? It's so obviously untrue that it's laughable. Why would you think that mental ability is magically distributed evenly?
There will always be people who are more motivated and capable of consolidating power. That cannot be stopped.
Capitalism and democracy (both with guardrails) are meant to harness and contain that energy such that it doesn't instantly destroy a society.
Religion goes in there somewhere too.
None of these systems of organization is perfect, and none of them seems ideal on the surface; when you see them in practice, there are many flaws.
But they are feasible.
Your 'distributism' system doesn't pass the feasibility test.
Well, with that attitude, it will definitely remain impossible.
"AI makes human labor obsolete"
Given that comparative advantage offers an off-ramp from this for a lot of what we currently understand as "economics", if the author is positing that we will be beyond even that, then your response is missing the forest for the trees.
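The comparative-advantage point can be made concrete with a toy calculation (the numbers here are illustrative, not from the thread): even if an AI has an absolute advantage at every task, total output still rises when the human specializes in the task where their relative disadvantage is smallest.

```python
# Ricardo-style toy example with made-up per-hour productivities.
# The AI is strictly better at both "code" and "docs", yet trade
# between the two still produces a Pareto improvement.

HOURS = 8
ai = (10, 8)     # (code/hour, docs/hour) -- AI is better at both
human = (1, 4)

def output(code_hours, rate):
    """Output (code, docs) when `code_hours` go to code, rest to docs."""
    return (code_hours * rate[0], (HOURS - code_hours) * rate[1])

# No specialization: each party splits its time evenly.
a, h = output(4, ai), output(4, human)
split = (a[0] + h[0], a[1] + h[1])   # 44 code, 48 docs

# Specialization: the human does only docs (lowest opportunity cost:
# 1/4 unit of code per doc vs. the AI's 10/8), freeing the AI to
# shift hours toward code while keeping docs output level.
a, h = output(6, ai), output(0, human)
spec = (a[0] + h[0], a[1] + h[1])    # 60 code, 48 docs

assert spec[0] > split[0] and spec[1] >= split[1]
print(split, spec)  # (44, 48) (60, 48)
```

Under these assumed numbers, specialization yields 60 units of code instead of 44 with docs output unchanged, which is the off-ramp the comment is pointing at: absolute superiority does not by itself make human labor worthless.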