Technology moves forward, and productivity improves for those who move with it.
Just by having lived longer, they might've had the chance to develop some intuition about the true cost of disruption, and about how whatever Google's doing is not a free lunch. Of course, neither they, nor you (nor I, for that matter) were ever taught the conceptual tools to analyze the workings of the Ivy League whiz kids who have been assigned to be "eating the world" this generation.
Instead we've been incentivized to teach ourselves how to be motivated by post-hoc rationalizations. Ones we have to produce at our own expense, too. Yummy.
Didn't Saint Google end up enshittifying people's very idea of how much "all of the world's knowledge" even is, gatekeeping it in breadth, depth, and availability to however much of it makes AdSense money? Which is already a whole lot of new, useful stuff at your fingertips, sure. But when they said "organizing all of the world's knowledge", were they making any claims about the representativeness of the selection? No; they made the sure bet that it's not something the user would ever measure.
In fact, with this overwhelming amount of convincing, non-experientially-backed knowledge made available to everyone - not to mention the whole mass surveillance thing, lol (smile, their AI will remember you forever) - what happens first and foremost is that the individual becomes eminently marketable-to, far more deeply than over Teletext. They think they're able to independently make sense of all the available information, but instead fall prey to the most appealing narrative, not unlike a day trader taking a haircut on market day. And then one has to deal with even more people whose life is something somebody sold to them: a race to the bottom in the commoditized activity (in the case of AI, language-based meaning-making).
But you didn't warn your parents about any of that, or sit down and have a conversation about where it means things are headed. (For that matter, neither did they, even though presumably they've had their lives altered by the technological revolutions of their own day.) Instead, here you find yourself stepping in to keep that conversation from happening among the public, either! "B-but it's obvious! G-get with it or get left behind!" So kind of you to advise me. Thankfully it's just what someone's paid for you to think. And that someone probably felt very productive paying big money to make people think the correct things, too, but opinions don't actually produce things, do they? Even the ones that don't cost money to hold.
So if it's not about the productivity but about the obtaining of money to live, why not go extract that value from where it is, instead of breathing its informational exhaust? Oh, just because, figuratively speaking, it's always the banks that have AIs that don't balk at "how to rob the bank"; and it's always we who don't. Figures, no? But they don't let you into the vault for being part of the firewall.
Would you say the same thing ("If it helped as much as it's claimed, there wouldn't be any need to try and convince people they should be using it.") about the internet?
The thing's called a self-fulfilling prophecy. Next level up from an MLM scheme: total bootstrap. Throwing shit at things is an innate primate activity; use money to shift people's attention onto a given thing for long enough, and eventually they'll throw enough shit at the wall for something to stick. At which point it becomes something able to roll along with the market cycles.
What's new about it is that as the means of creating the requisite affective environment of hyperreality ("engagement") gradually became available not only to nation-states but to private parties (down to the individual amateur influencer starting with no political agenda), they managed to interpose themselves as the very means of our critique of themselves.
Which on the meta level is genius, as it establishes whole new channels for inter-reality exchange (a.k.a. cultural osmosis) on a first-come-first-served basis; knowledge arbitrators love those. Not everyone necessarily finds it too comfy to find themselves in the same ecosystem as said ingroup, though. Hence also the equally elaborate evolution in faux-antisystemic reaction since the turn of the century (while they try to bury the potential of technology as a means of critique and bottom-up social reengineering).
I think in this case the thought process was based on experience with older, electro-mechanical machines, where the most common failure mode was parts wearing out.
Since software can, indeed, not "wear out", someone made the assumption that it was therefore inherently more reliable.
Bureaucracy being (per Graeber, 2006) something like a ritual whereby, by means of a set of pre-fashioned artifacts, for each other's sake we all operate at 2% of our normal mental capacity; and that's how modern data-driven, conflict-averse societies organize work and distribute resources without anyone being able to get any complaint listened to.
>Bureaucracies public and private appear—for whatever historical reasons—to be organized in such a way as to guarantee that a significant proportion of actors will not be able to perform their tasks as expected. It also exemplifies what I have come to think of as the defining feature of a utopian form of practice, in that, on discovering this, those maintaining the system conclude that the problem is not with the system itself but with the inadequacy of the human beings involved.
In most places where a computer system is involved in the administration of a public service or something of that caliber, has that ever been a grassroots effort; hey, computers are cool and awesome, let's see what they change? No, it's something that's been imposed in the definitive top-down manner of XX-century bureaucracies. Remember the cohort of people who used to become stupid the moment a "thinking machine" was powered up within line of sight (before the last uncomputed generation retired and got their excuse to act dumb for the rest of it)? Consider them in view of the literally incomprehensible number of layers that any "serious" piece of software consists of; layers which we're stuck producing more of, when any software professional knows the best kind of software is less of it.
But at least it saves time and the forest, right? Ironically, getting things done in a bureaucratic context with less overhead than filling out paper forms or speaking to human beings makes them even easier to fuck up. And then there's the useful fiction of "the software did it", which e.g. the "AI agents" thing is trying to productize. How about they just give people a liability slider in the spin-up form? Eh, but nah.
Wanna see a miracle? A miracle is when people hype each other into pretending something impossible happened. To the extent user-operated software is involved in most big-time human activities, the daily miracle is how it seems to work just well enough for people to be able to pretend it works any good at all. Many more than 3 such cases. But of course, remembering the catastrophic mistakes of the past can be turned into a quaint fun-time activity. Building things that empower people to make fewer mistakes, meanwhile, is a little different from building artifacts for non-stop "2% time".
> If the platform really needs to watch every cycle that tightly, you aren't going to be a general purpose platform, and you might as well just make a monolithic C++ embedded application, rather than a whole new platform that is very likely to have a low shelf life as the hardware platform evolves.
Which I think is agreeable up to a certain point, but potentially naive. That monolithic C++ embedded application is going to be fundamentally built out of a scheduler, IO and driver interfaces, and a shell. That's the only sane way to do something like this. And that's an operating system.
Exactly! I picture the choice as being between grandfathering in compatibility with existing OSes (having the promised performance of their product indirectly modulated by the output of every other team of the world's smartest throughout computing history and the present day), versus wringing another OS-sized piece of C++ tech debt out upon unsuspecting humanity. In which case I'm thankful to Carmack for making the call.
I can understand how "what you're doing is fundamentally pointless" is something they can only afford to hear from someone who already has their magnitude of fuck-you money. Furthermore, in a VC-shaped culture it can also be a statement that's fundamentally incomprehensible to many people.