gdubs · 11 days ago
AI has been improving at a very rapid pace, which means that a lot of people have really outdated priors. I see this all the time online where people are dismissive about AI in a way that suggests it's been a while since they last checked in on the capabilities of models. They wrote off the coding ability of ChatGPT on version 3.5, for instance, and have missed all the advancements that have happened since. Or they talk about hallucination and haven't tried Deep Research as an alternative to traditional web search.

Then there's a tendency to be so 'anti' that there's an assumption that anyone reporting that the tools are accomplishing truly impressive and useful things must be an 'AI booster' or shill. Or they assume that person must not have been a very good engineer in the first place, etc.

Really is one of those examples of the quote, "In the beginner's mind there are many possibilities, but in the expert's mind there are few."

It's a rapidly evolving field, and unless you actually spend some time kicking the tires on the models every so often, you're just basing your opinions on outdated experiences or what everyone else is saying about it.

jdoliner · 11 days ago
I feel like I see these two opposite behaviors. People who formed an opinion about AI from an older model and haven't updated it. And people who have an opinion about what AI will be able to do in the future and refuse to acknowledge that it doesn't do that in the present.

And often when the two are arguing it's tricky to tell which is which, because whether or not it does something isn't totally black and white. There are some things it can sometimes do, and you can argue either way about whether those count as being within its capabilities.

forgotTheLast · 10 days ago
I.e. people who look at f(now) and assume it'll be like this forever, versus people who look at f'(now) and assume it'll improve like this forever.
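
Spelled out as a quick sketch (treating capability as a differentiable function f of time, which is of course itself a simplification), the two camps truncate the same Taylor expansion at different orders:

  % zeroth-order extrapolation: "it'll be like this forever"
  f(t) \approx f(t_0)
  % first-order extrapolation: "it'll improve like this forever"
  f(t) \approx f(t_0) + f'(t_0)\,(t - t_0)
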
nchmy · 10 days ago
Another very significant cohort is people who formed a negative opinion without even the slightest interest in genuinely trying to learn how to use it (or even trying at all)
thegrim33 · 11 days ago
To play devil's advocate, how is your argument not a 'No True Scotsman' argument? As in, "oh, they had a negative view of X, well that's of course because they weren't testing the new and improved X2 model, which is different". Fast forward a year... "Oh, they have a negative view of X2, well silly them, they need to be using the Y24 model, that's where it's at, the X2 model isn't good anymore". Fast forward a year... ad infinitum.

Are the models that exist today a "true Scotsman" for you?

xwowsersx · 10 days ago
It's not a No True Scotsman. That fallacy redefines the group to dismiss counterexamples. The point here is different: when the thing itself keeps changing, evidence from older versions naturally goes stale. Criticisms of GPT-3.5 don't necessarily hold against GPT-4, just like reviews of Windows XP don't apply to Windows 11.
vlovich123 · 11 days ago
How is that different from saying that today's models are actually usable for non-trivial things and more capable than yesterday's, and that it's also true that tomorrow's models will probably be more capable than today's?

For example, I dismissed AI three years ago because it couldn’t do anything I needed it to. Today I use it for certain things and it’s not quite capable of other things. Tomorrow it might be capable of a lot more.

Yes, priors have to be updated when the ground truth changes, and the capabilities of AI change rapidly. This is how chess engines on supercomputers were competitive in the 90s, then hybrid human-machine systems became the leading edge, and then machines took over for good and never looked back.

Mars008 · 10 days ago
There is another big and growing group: charlatans (influencers). People who don't know much but make bold statements and select 'proof' cases, just to get attention. There are many of them on YouTube. When you see someone making faces in a thumbnail, it's most likely this.
trinsic2 · 10 days ago
Here[0] is a perfect example of this. There are so many YouTubers making videos about the future of AI as a doomsday prediction. It's kind of irresponsible, actually. These YouTubers read a book on the downfall of humanity because of AGI. Many of these authors seem like they are repeating the Terminator/Skynet themes. Because of all this false information, it's hard to believe anything that is being said about the future of AI on YouTube now.

[0]: https://www.youtube.com/watch?v=5KVDDfAkRgc

resource0x · 10 days ago
> There are many of them on YouTube.

Not as many as on HN. "Influencers" have agendas and a stream of income, or other self-interest. HN always comes off as a monolith, on any subject. Counter-arguments get ignored and downvoted to oblivion.

barrell · 10 days ago
There are also a bunch of us who do kick the tires very often and are consistently underwhelmed.

There are also those of us who have used them substantially, and seen the damage that causes to a codebase in the long run (in part due to the missing gains of having someone who understands the codebase).

There are also those of us who just don’t like the interface of chatting with a robot instead of just solving the problem ourselves.

There are also those of us who find each generation of model substantially worse than the previous generation, and find the utility trending downwards.

There are also those of us who are concerned about the research coming out about the effects of using LLMs on your brain and cognitive load.

There are also those of us who appreciate craft, and take pride in what we do, and don’t find that same enjoyment/pride in asking LLMs to do it.

There are also those of us who worry about offloading our critical thinking to big corporations, and becoming dependent on a pay-to-play system that is currently being propped up by artificially lowered prices, with "RUG PULL" written all over it.

There are also those of us who are really concerned about the privacy issues, and don't trust companies hundreds of billions of dollars in debt to some of the least trustworthy individuals with that data.

Most of these issues don’t require much experience with the latest generation.

I don’t think the intention of your comment was to stir up FUD, but I feel like it’s really easy for people to walk away with that from this sort of comment, so I just wanted to add my two cents and tell people they really don’t need to be wasting their time every 6 weeks. They’re really not missing anything.

Can you do more than a few weeks ago? Sure? Maybe? But I can also do a lot more than I could a few weeks ago without using an LLM. I've learned and improved myself.

Chances are if you're not already using an LLM it's because you don't like it, or don't want to, and that's really ok. If AGI comes out in a few months, all the time you would have invested now would be out of date anyways.

There’s really no rush or need to be tapped in.

bigstrat2003 · 10 days ago
> There are also a bunch of us who do kick the tires very often and are consistently underwhelmed.

Yep, this is me. Every time people are like "it's improved so much" I feel like I'm taking crazy pills as a result. I try it every so often, and more often than not it still has the same exact issues it had back in the GPT-3 days. When the tool hasn't improved (in my opinion, obviously) in several years, why should I be optimistic that it'll reach the heights that advocates say it will?

libraryofbabel · 10 days ago
There’s really three points mixed up in here.

1) LLMs are controlled by BigCorps who don’t have user’s best interests at heart.

2) I don’t like LLMs and don’t use them because they spoil my feeling of craftsmanship.

3) LLMs can’t be useful to anyone because I “kick the tires” every so often and am underwhelmed. (But what did you actually try? Do tell.)

#1 is obviously true and is a problem, but it’s just capitalism. #2 is a personal choice, you do you etc., but it’s also kinda betting your career on AI failing. You may or may not have a technical niche where you’ll be fine for the next decade, but would you really in good conscience recommend a juniorish web dev take this position? #3 is a rather strong claim because it requires you to claim that a lot of smart reasonable programmers who see benefits from AI use are deluded. (Not everyone who says they get some benefit from AI is a shill or charlatan.)

dmead · 11 days ago
Is there anything you can tell me that will help me drop the nagging feeling that gradient descent trained models will just never be good?

I understand all of what you said, but I can't get over the fact that the term AI is being used for these architectures. It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.

Maybe I'm being overly cynical, but a lot of this stinks.

atleastoptimal · 10 days ago
The thing is, AI is already "good" for a lot of things. It all depends on your definition of "good" and what you require of an AI model.

It can do a lot of things very effectively. High-reliability semantic parsing from images is just one thing that modern LLMs are very good at.
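
As a minimal sketch of what that kind of image-to-structured-data extraction looks like in practice (the SDK call shape is real, but the model name, file name, and prompt here are illustrative assumptions, not recommendations):

  # Hedged sketch: pull structured fields out of an image with a
  # vision-capable chat model via the OpenAI Python SDK.
  import base64
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  with open("receipt.jpg", "rb") as f:  # hypothetical input image
      image_b64 = base64.b64encode(f.read()).decode()

  response = client.chat.completions.create(
      model="gpt-4o",  # assumption: any vision-capable model would do
      messages=[{
          "role": "user",
          "content": [
              {"type": "text",
               "text": "Extract vendor, date, and total from this receipt as JSON."},
              {"type": "image_url",
               "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
          ],
      }],
  )
  print(response.choices[0].message.content)  # e.g. a small JSON object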

Zacharias030 · 11 days ago
Wouldn't you say that now, finally, what people call AI combines subsymbolic systems ("gradient descent") with search and with symbolic systems (tool calls)?

I had a professor in AI who was only working on symbolic systems such as SAT-solvers, Prolog etc. and the combination of things seems really promising.

Oh, and what would be really nice is another level of memory or fast learning ability that goes beyond burning in knowledge through training alone.

int_19h · 9 days ago
> It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.

If you gave a random sci-fi writer from 1960s access to Claude, I'm fairly sure they wouldn't have any doubts over whether it is AI or not. They might argue about philosophical matters like whether it has a "soul" etc (there's plenty of that in sci-fi), but that is a separate debate.

globnomulous · 5 days ago
I belong to the camp for whom the capabilities of the models are irrelevant.

The largest and most successful models all appear to have been built unethically. I want nothing to do with these companies or the slime who run them, and I will leave my professional field before I'll become their unwilling user.

I don't want machines to write my code. I want to write it. I want to solve the problems and find the bugs myself. The engineers I work with who seem to rely most heavily on these tools all seem to be losing their sharpness and problem-solving ability. Many of them praise the models for making it easy to write tests. (Software engineers who treat testing with this kind of carelessness and dismissiveness should lose their jobs.)

I like what I do. I like the way I do it. The day I have no choice but to do my work, in essence, by scheduling a fucking meeting with a chipper chat bot and telling it what to do will be the day I retire and start a new career. I can't imagine a drearier way to work with technology.

analog31 · 10 days ago
There's a middle ground which is to watch and see what happens around us. Is it unholy to not have an opinion?
scotty79 · 10 days ago
> They wrote off the coding ability of ChatGPT on version 3.5, for instance, and have missed all the advancements that have happened since.

I feel like I see more dismissive comments now than before. As if people who were initially confused have since formed a firm belief. And now new facts don't really change it; they just entrench them in their chosen belief.

CPLX · 11 days ago
I agree with you. I am a perpetual cynic about new technology (and a GenXer, so multiply that by two), and I have deeply embraced AI in all parts of my business; I'm basically engaging with it all day for various tasks, from helping me compare restaurant options to re-tagging a million contact records in Salesforce.

It’s incredibly powerful and will just clearly be useful. I don’t believe it’s going to replace intelligence or people but it’s just obviously a remarkable tool.

But I think at least part of the dynamic is that the SV tech hype booster train has been so profoundly full of shit for so long that you really can’t blame people for skepticism. Crypto was and is just a giant and elaborate grift, to name one example. Also guys like Altman are clearly overstating the current trajectory.

The dismissive response does come with some context attached.

parineum · 11 days ago
> But I think at least part of the dynamic is that the SV tech hype booster train has been so profoundly full of shit for so long that you really can’t blame people for skepticism.

They are still full of shit about LLMs, even if it is useful.

9rx · 10 days ago
> They wrote off the coding ability of ChatGPT on version 3.5, for instance

I found I had better luck with ChatGPT 3.5's coding abilities. What the newer models are really good at, though, is doing the high level "thinking" work and explaining it in plain English, leaving me to simply do the coding.

on_the_train · 10 days ago
But the reports are from shills. The impact of AI is almost nonexistent. The greatest impact it had was on role-playing. It's hardly even useful for coding.

And that all wouldn't be a problem if it wasn't for the wave of bots that makes the crypto wave seem like child's play.

loandbehold · 10 days ago
I don't understand people who say AI isn't useful for coding. Claude Code improved my productivity 10x. I used to put in a solid 8 hours a day at my remote software engineering job. Now I finish everything in 2 hours and go play with my kids. And my performance is better than before.
lopatin · 10 days ago
> They wrote off the coding ability of ChatGPT on version 3.5, for instance, and have missed all the advancements that have happened since.

> It's hardly even useful for coding.

I’m curious what kind of projects you’re writing where AI coding agents are barely useful.

It's the "shills" on YouTube that keep me up to date with the latest developments and best practices to make the most of these tools. To me that makes tools like CC not only useful but indispensable. Now I don't focus on writing the thing; I focus on building agents that are capable of building the thing with a little guidance.

research_pie · 10 days ago
I think one of the issues is also the sheer amount of shilling going on, at crypto levels.

I've got a modest tech following, and you wouldn't believe the amounts I'm offered to promote the most garbage AI companies.

LennyHenrysNuts · 9 days ago
And that's why I keep checking back in.

They're still pretty dumb if you want them to do anything (i.e. with MCPs), but they're not bad at writing and code.

libraryofbabel · 10 days ago
I do see this a lot. It's hard to have a reasonable conversation about AI amidst, on the one hand, hype-mongers and boosters talking about how we'll have AGI in 2027 and all jobs are just about to be automated away, and on the other hand, a chorus of people who hate AI so much they have invested their identity in it failing and haven't really updated their priors since ChatGPT came out. Both groups repeat the same set of tired points that haven't really changed much in three years.

But there are plenty of us who try and walk a middle course. A lot of us have changed our opinions over time. ("When the facts change, I change my mind.") I didn't think AI models were much use for coding a year ago. The facts changed. (Claude Code came out.) Now I do. Frankly, I'd be suspicious of anyone who hasn't changed their opinions about AI in the last year.

You can believe all these things at once, and many of us do:

* LLMs are extremely impressive in what they can do. (I didn't believe I'd see something like this in my lifetime.)

* Used judiciously, they are a big productivity boost for software engineers and many other professions.

* They are imperfect and make mistakes, often in weird ways. They hallucinate. There are some trivial problems that they mess up.

* But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.

* AI will change the world in the next 20 years

* But AI companies are overvalued at the present time and we're most likely in a bubble which will burst.

* Being in a bubble doesn't mean the technology is useless. (c.f. the dotcom bubble or the railroad bubble in the 19th century.)

* AGI isn't just around the corner. (There's still no way models can learn from experience.)

* A lot of people making optimistic claims about AI are doing it for self-serving boosterish reasons, because they want to pump up their stock price or sell you something

* AI has many potential negative consequences for society and mental health, and may be at least as nasty as social media in that respect

* AI has the potential to accelerate human progress in ways that really matter, such as medical research

* But anyone who claims to know the future is just guessing

IX-103 · 10 days ago
> But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.

I've not seen anything from a model to persuade me they're not just stochastic parrots. Maybe I just have higher expectations of stochastic parrots than you do.

I agree with you that AI will have a big impact. We're talking about somewhere between "invention of the internet" and "invention of language" levels of impact, but it's going to take a couple of decades for this to ripple through the economy.

dvfjsdhgfv · 10 days ago
> AI will change the world in the next 20 years

Well, it's been changing the world for quite some time, both in good and bad ways. There is no need to add an arbitrary timestamp.

kelseyfrog · 10 days ago
There are three important beliefs at play in the A(G)I story:

1. When (if) AGI will arrive. It's likely going to be smeared out over a couple of months to years, but relative to everything else, it's a historical blip. This really is the most contentious belief, with the most variability. It is currently predicted to be 8 years away[1].

2. What percentage of jobs will be replaceable with AGI? Current estimates run between 80 and 95% of professions. The remaining professions "culturally require" humans. Think live performance, artisanal goods, in-person care.

3. How quickly will AGI supplant human labor? What is the duration of replacement, from inception to saturation? Replacement won't happen evenly; some professions are much easier to replace with AGI, some much more difficult. Let's estimate a 20-30 year horizon for the most stubborn-to-replace professions.

What we have is a ticking time bomb of labor change at least an order of magnitude greater than the transition from an agricultural economy to an industrial economy or from an industrial economy to a service economy.

Those happened over the course of several generations. Society (culture, education, the legal system, the economy) was able to absorb the changes over 100-200 years. Yet we're talking about a change on the same scale happening 10 times faster, within the timeline of one's professional career. And still, with previous revolutions we had incredible unrest and social change. Taken as a whole, we'll have possibly the majority of the economy operating outside the territory of society, the legal system, and the existing economy. A kid born on the "day" AGI arrives will become an adult in a profoundly different world, as if born on a farm in 1850 and reaching adulthood in a city in 2000.

1. https://www.metaculus.com/questions/5121/date-of-artificial-...

semi-extrinsic · 10 days ago
Your only reference [1] is to a page where anybody in the world can join and vote. It literally means absolutely nothing.

For [2] you have no reference whatsoever. How does AI replace a nurse, a vet, a teacher, a construction worker?

btilly · 11 days ago
In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs. And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.

This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?

I don't think that we have a good answer to that. And we may need it sooner rather than later. I'd be more optimistic if I trusted our leadership more. But wise political leadership is not exactly a strong point for our country right now.

chrisco255 · 11 days ago
> but which can be trained to the new job opportunities more easily than humans can

What makes you think that? Self driving cars have had untold billions of dollars in research and decades of applied testing, iteration, active monitoring, etc., and they still have a very long tail of unaddressed issues. They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots, completely ignorant of the turmoil that was going on. A human driver is still far more adaptive and requires a lot less training than AI, and humans are ready to handle the infinitely long tail of exceptions to the otherwise algorithmic task of driving, which follows strict rules.

And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than driving, with even less training data to go off, I find myself firmly in the skeptics' camp, which holds that you will struggle even harder to apply humanoid robotics in uncontrolled environments across a diverse range of tasks without human intervention, piloting, maintenance, or management.

Unemployment is still near all-time lows, and this will persist for some time, as we have a structural demographic problem with massive numbers of retirees and fewer children to support the population "pyramid" (which is looking more like a tapering rectangle these days).

schneems · 11 days ago
A few months ago I saw one driverless car maybe every three days. Now I see roughly 3-5 every day.

I get that it’s taken a long time and a lot of hype that hasn’t panned out. But once the tech works and it’s just about juicing the scale then things shift rapidly.

Even if you think "oh, that's the next generation's problem": if there is a chance you're wrong, or if you want to be kind to the next generation, now is the time to start thinking and planning for those problems.

I think the most sensible answer would be something like UBI. But I also think the most sensible answer for climate change is a carbon tax. Just because something is sensible doesn't mean it's politically viable.

socalgal2 · 11 days ago
Do you live in SF (the city, not the Bay Area as a whole) or West LA? I ask because in these areas you can stand on any city street and see several self driving cars go by every few minutes.

It's irrelevant that they've had a few issues. They already work and people love them. It's clear they will eventually replace every Uber/Lyft driver, probably every taxi driver, and they'll likely replace every DoorDash/Grubhub driver with vehicles designed to let smaller automated delivery carts go the last few blocks. They may also replace every truck driver. Together that's around 5 million jobs in the USA.

Once they're let on the freeways their usage will expand even faster.

einarfd · 11 days ago
Driverless taxis are IMO the wrong tech to compare to. Driving is a high-consequence, low-error-tolerance, real-time task, where it's really hard to undo errors.

There is a big category of tasks that isn't like that but is still economically significant. Those are a much better fit for AI.

mofeien · 11 days ago
> What makes you think that? Self driving cars [...]

AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.

The more apt analogy is to other species. When was the last time there was something other than homo sapiens that could carry on an interesting conversation with homo sapiens. 40,000 years?

And this new thing has been in development for what? 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is.

motorest · 11 days ago
> What makes you think that? Self driving cars have had (...)

I think you're confusing your cherry-picked comparison with reality.

LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.

Software engineering is being affected as well, and it requires far greater know-how, experience, and expertise to meet the hiring bar.

> And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than (...)

Yes, your tech job is also going to be decimated. It's not a matter of having PMs write code. It's a matter of your junior SDE, armed with an LLM, being quite able to clear your bug backlog in a few days while improving test coverage metrics and refactoring code back from legacy status.

If a junior SDE can suddenly handle the workload that previously required a couple of mid-level and senior developers, why would a company keep around 4 or 5 seasoned engineers when an inexperienced one is already able to handle the workload?

That's where the jobs will vanish. Even if demand remains, it has dropped considerably, enough not to justify retaining so many people on a company's payroll.

And what are you going to do then? Drive an Uber?

andrei_says_ · 11 days ago
As someone who lives in LA, I don’t think self-driving cars existed at the time of the Rodney King LA riots and I am not aware of any other riots since.
rafaelero · 11 days ago
I feel like you are trapped in the first assessment of this problem. Yes, we are not there yet, but have you thought about the rate of improvement? Is that rate of improvement reliable? Fast? That's what matters, not where we are today.
atleastoptimal · 11 days ago
Everything anyone could say about bad AI driving could be said about bad human drivers. Nevertheless, Waymo has not had a single fatal accident despite many millions of passenger miles and is safer than human drivers.
andrepd · 11 days ago
To be fair, self-driving cars don't need to be perfect 0-casualty modes of transportation, they just need to be better than human drivers. Since car crashes kill over a million people each year (and maim millions more), this is a low bar to clear...

Of course, the actual answer is that rail and cycling infrastructure are much more efficient than cars in any moderately dense region. But that would mean funding boring regular companies focused on providing a product or service for adequate profit, instead of exciting AI web3 high tech unicorn startups.

bloaf · 11 days ago
I think it is important to remember that "decades" here means <20 years. Remember that in 2004 it was considered so close to impossible that basically no one had a car that could be reliably controlled by a computer, let alone driven by a computer alone:

https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2004)

I also think that most job domains are not actually more nuanced or complex than driving, at least from a raw information perspective. Indeed, I would argue that driving is something like a worst-case scenario when it comes to tasks:

* It requires many different inputs, at high sampling rates, continuously (at the very least, video, sound, and car state)

* It requires loose adherence to laws in the sense that there are many scenarios where the safest and most "human" thing to do is technically illegal.

* It requires understanding of driving culture to avoid making decisions that confuse/disorient/anger other drivers, and anticipating other drivers' intents (although this can be somewhat faked with sufficiently fast reaction times)

* It must function in a wide range of environments: there is no "standard" environment

If we compare driving to other widespread-but-low-wage jobs (e.g. food prep, receptionists, cleaners) there are generally far more relaxed requirements:

* Rules may be unbreakable as opposed to situational, e.g. the cook time for burgers is always the same.

* Input requirements may be far lower. e.g. an AI receptionist could likely function with audio and a barcode scanner.

* Cultural cues/expectations drive fewer behaviors. e.g. an AI janitor just needs to achieve a defined level of cleanliness, not gauge people's intent in real-time.

* Operating environments are more standardized. All these jobs operate indoors with decent lighting.

perryizgr8 · 11 days ago
> They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots

All of this is very common for human driven cars too.

selimnairb · 11 days ago
> A human driver is still far more adaptive and requires a lot less training than AI

I get what you are saying, but humans need 16 years of training to begin driving. I wouldn’t call that not a lot.

CamperBob2 · 11 days ago
Self-driving cars are a political problem, not a technical problem. A functioning government would put everything from automation-friendly signaling standards to battery-swapping facilities into place.

We humans used to do that sort of thing, but not anymore, so... bring on the AI. It won't work as well as it might otherwise be able to, but it'll probably kill fewer humans on the road at the end of the day. A low bar to clear.

GoatInGrey · 11 days ago
Snarky but serious question: How do we know that this wave will disrupt labor at all? Every time I dig into a story of X employees replaced by "AI", it's always in a company with shrinking revenues. Furthermore, all of the high-value use cases involve very intense supervision of the models.

There's been a dream of unsupervised models going hog wild on codebases for the last three years. Yet even the latest and greatest Claude models can't be trusted to write a new REST endpoint exposing 5 CRUD methods without fucking something up. No, it requires not only human supervision, but it also requires human expertise to validate and correct.

I dunno. I feel like this language grossly exaggerates the capability of LLMs to paint a picture of them reliably fulfilling roles end-to-end instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.

closewith · 11 days ago
> instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.

This alone is enough to completely reorganise the labour market, as it describes an enormous number of roles.

drooby · 11 days ago
Carpenters, landscapers, roofers, plumbers, electricians, elderly care, nurses, cooks, servers, bakers, musicians, actors, artists...

Those jobs are probably still a couple of decades or more away from displacement, some possibly never, and we will need them in higher numbers. And perhaps it's ironic that these are some of the oldest professions.

Everything we do is in service of paying for our housing, transportation, eating food, healthcare and some fun money.

Most goes to housing, healthcare, and transportation.

Healthcare costs may come down some with advancements in AI. R&D will be cheaper. Knowledge will be cheaper and more accessible.

But what people care about, what people have always cared about, remains in professions that are as old as time and, I don't see them fully replaceable by AI just yet - enhanced, yes, but not replaced.

Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.

Or perhaps in the future everyone will work in finance. Everyone's a corporation.

Ramble ramble ramble

nlawalker · 11 days ago
> Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.

I think it's going to be the other way around. It's looking like automation of dynamic physical capability is going to be the very last thing we figure out; what we're going to get first is teams of lower-skilled human workers directed largely by jobsite AI. By the time the robots get there, they're not going to need a human watching them.

ares623 · 11 days ago
Well, when I become unemployable I will start upskilling to be an electrician. And so will hundreds of thousands like me.

That will do wonders for salaries, I think, and everyone will be better off.

drivebyhooting · 11 days ago
Those jobs don’t pay particularly well today, and many have poor working conditions that strain the body.

Imagine what they’ll be like with an influx of additional laborers.

xpe · 11 days ago
I would be cautious about anchoring on any "old versus new" professions narrative. I would seek out other ways of thinking about it.

For example, I predict humans will maintain competitive advantage in areas where the human body excels due to its shape, capabilities, or energy efficiency.

000ooo000 · 11 days ago
What this delusion seems to turn a blind eye to is that a good chunk of the population is already in those roles; what happens when the supply of workers in those roles far exceeds the demand, in a relatively short time? Carpenters suddenly abundant, carpenter wages drop, carpenters struggling to live, carpenters forced to tighten spending, carpenters decide children aren't affordable... Now extrapolate that across all of the impacted roles and industries. No doubt someone is already typing "carpenters can retrain too!" OK, so they're back to entry-level wages (if anything) for 5+ years? Same story. And retrain to what?

At some point an equilibrium will be reached but there is no guarantee it will be a healthy situation or a smooth ride. This optimism about AI and the rosy world that is just around the corner is incredibly naive.

ozim · 11 days ago
I just have to see how you get, let's say, 100k copywriters trained to be carpenters.

Do you also force them to move to places where there are fewer carpenters?

idiotsecant · 11 days ago
In your example i think it's a great deal more likely that the Uber driver is paid a tiny stipend to supervise a squad of gardening androids owned at substantial expense by Amazon Yard.
ajmurmann · 11 days ago
That healthcare jobs will be safe is nice on the surface, but it also means that while other jobs become more scarce, the cost of healthcare will continue to go up.
OneMorePerson · 11 days ago
Far from an expert on this topic, but what differentiates AI from other non-physical efficiency tools? (I'm actually asking, not contesting.)

Won't companies always want to compete with one another, so that simply using AI won't be enough? We will always want better and better software, more features, etc., so that race will never end until we get an AI fully capable of managing all parts (100%) of the development process (which we don't seem to be close to yet).

From Excel to AutoCAD, a lot of tools that were expected to decrease the amount of work ended up actually increasing it, due to new capabilities and the constant demand for innovation. I suppose the difference comes down to whether AI will just continue to get really good, or become SO good that it is plug-and-play and completely replaces people.

xpe · 11 days ago
> what differentiates AI from other non physical efficiency tools?

At some point: (1) general intelligence; i.e. adaptivity; (2) self replication; (3) self improvement.

marstall · 11 days ago
Every software company I've ever worked with has an endless backlog of features it wants/needs to implement. Maybe AI just lets them move through these features more quickly?

I mean, most startups fail. And in software startups, the blame for that is usually at least shared by "the software wasn't good enough". So that $20 million seed investment is still going to go into "software development", i.e. programmer salaries. They will be using the higher-level language of AI much of the time, and be 2-5 times more efficient. But will it be enough? No. Most will still fail.

xpe · 11 days ago
Companies don’t always compete on capability or quality. Sometimes they compete on efficiency. Or sometimes they carve up the market in different ways.
nine_k · 11 days ago
> And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.

I think it did not work like that.

Automatic looms displaced large numbers of weavers, skilled professionals, who did not immediately find jobs tending dozens of mechanical looms. (Mr Ludd was one of these displaced professionals.)

Various agricultural machines and chemical products displaced colossal numbers of country people, who had to go to cities looking for industrial jobs; US agriculture employed 50% of the workforce in 1880 and only 10% in 1930.

The advent of internet displaced many in the media industry, from high-caliber journalists to those who worked in classified ads newspapers.

All these disruptions created temporary crises, because there was no industry that was ready to immediately employ these people.

marstall · 11 days ago
Temporary - that's the key. People were able to move to the cities and get factory and office jobs, and over time were much better off. I can complain about the socially alienated condition I'm in as an office worker, but I would NEVER want to do farm work: cold/sun, aching back, zero benefits, low pay, risk of crop failure, a whole other kind of isolation, etc.
bsder · 11 days ago
> we wound up with more and better jobs.

You will have to back that statement up because this is not at all obvious to me.

If I look at the top US employers in, say, 1970 vs 2020, the companies that dominated 1970 were noted for having hard blue-collar labor jobs that paid enough to keep a single-earner family significantly above minimum wage and the poverty line. The companies that dominate in 2020 are noted for being some of the shittiest employers, with some of the lowest pay, fairly close to minimum wage, and the absolute worst working conditions.

Sure, you tend not to get horribly maimed in 2020 vs 1970. That's about the only improvement.

moffkalast · 11 days ago
This was already a problem back then; Nixon was about to introduce UBI in the late 60s, and then the administration decided that having people work pointless jobs keeps them better occupied, and the rest of the world followed suit.

There will be new jobs, and they will be completely meaningless busywork: people performing nothing of substance while being compensated for it. It's our way of doing UBI and we've been doing it for 50 years already.

Obligatory https://wtfhappenedin1971.com

aurareturn · 11 days ago

> This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?

Assuming AI doesn't get better than humans at everything, humans will be supervising and directing AIs.

Refreeze5224 · 11 days ago
That sounds like a job for a very small number of people. Where will everyone else work?
jmathai · 11 days ago
I’m not sure if that’s meant to be reassuring or not.

It’s hard for me to imagine that AI won’t be as good or better than me at most things I do. It’s quite a sobering feeling.

bambax · 11 days ago
One way to think about AI and jobs is Uber/Google Maps. You used to have to know a lot about a city to be a taxi driver; then suddenly with Google Maps you don't. So in effect, technology lowered the requirements or training needed to become a taxi driver. More people can do it, not less (although incumbents may be unhappy about this).

AI is a lot like this. In coding for instance, you still need to have some sense of good systems design, etc. and know what you want to build in concrete terms, but you don't need to learn the specific syntax of a given language in detail.

Yet if you don't know anything about IT, don't know what you want to build or what you could need, or what's possible, then it's unlikely AI can help you.

brap · 11 days ago
Even with Google Maps, we still need human drivers because current AI systems aren’t so great at driving and/or are too expensive to be widely adopted at this point. Once AI figures out driving too, what do we need the drivers for?

And I think that’s the point he was making, it’s hard to imagine any task where humans are still required when AI can do it better and cheaper. So I don’t think the Uber scenario is realistic.

I think the only value humans can provide in that future is “the human factor”: knowing that something is done by an actual human and not a machine can be valuable.

People want to watch humans playing chess, even though AI is better at it. They want to consume art made by humans. They want a human therapist or doctor, even if they heavily rely on AI for the technical stuff. We want the perspective of other humans even if they aren’t as smart as AI. We want someone that “gets” us, that experiences life the same way we do.

In the future, more jobs might revolve around that, and in industries where previously we didn’t even consider it. I think work is going to be mostly about engaging with each other (even more meetings!)

The problem is, in a world that is that increasingly remote, how do you actually know it’s a human on the other end? I think this is something we’ll need to solve, and it’s going to be hard with AI that’s able to imitate humans perfectly.

yomismoaqui · 11 days ago
Don't worry about the political leaders, if a sizeable amount of people lose their jobs they will surely ask GPT-10 how to build a guillotine.
HDThoreaun · 11 days ago
The french revolution did not go well for the average french person. Not sure guillotines are the solution we need.
mips_avatar · 11 days ago
We have to also choose to build technology that empowers people. Empowering technologies don't just pop into existence, they're created by people who care about empowering people.
antirez · 11 days ago
I too believe that a mostly autonomous work world is something we could handle well, assuming the leadership were composed of smart folks making the right decisions, without being too exposed to external powers that are impossible to win against (large companies and interests). The problem is when we mix what could happen (not clear when, right now) with the current weak leadership across the world.
LightBug1 · 11 days ago
As someone else said, until a company or individual is willing to risk their reputation on the accuracy of AI (beyond basic summarising jobs, etc), the intelligent monkeys are here for a good while longer. I've already been once bitten, twice shy.

The conclusion, sadly, is that CEOs will pause hiring and squeeze more productivity out of existing hires. This will impact junior roles the most.

fsflover · 11 days ago
Haven't you seen companies developing autonomous killing drones?
nikolayasdf123 · 11 days ago
> what do displaced humans transition to?

Go to any war-torn country or collapsed empire (Soviet). I have seen both, and grew up in one myself: you get desperation, people giving up, alcohol (the famous "X"-cross of birth rates dropping and deaths rising), drugs, crime, corruption and warlording. Rural communities are hit first and totally vanish, then small-tier cities vanish, then mid-tier; only the largest hubs remain. Loss of science, culture, and education. People are just gone. Only the ruins of whatever last shelters they had remain, not even their prime-time architecture. You can drive hundreds or thousands of kms across these ruins of what was once a flourishing culture. Years ago you would find one old person still living there. These days, not a single human is left. This is what is coming.

marstall · 11 days ago
that was because the economy was controlled/corrupt and not allowed to flourish (and create job-creating technologies like the internet and AI).
yaur · 11 days ago
I believe that historically we have solved this problem by creating gigantic armies and then killing off millions of people that couldn't really adapt to the new order with a world war.
bamboozled · 11 days ago
It's probably the only technology whose primary design goal is to replace humans. It's the VC dream.
xgkickt · 11 days ago
I do wonder if the amount they're spending on it is going to be cost effective versus letting humans continue doing the work.

kace91 · 11 days ago
>But as its capabilities improve, what do displaced humans transition to?

IF there is intellectual/office work that remains complex enough to not be tackled by AI, we compete for those. Manual labor takes the rest.

Perhaps that's the shift we'll see: nowadays the guy piling up bricks makes a tenth of the architect's salary; that relation might invert.

And the indirect effects of a society that values intellectual work less are really scary if you start to explore the chain of cause and effect.

ACCount37 · 11 days ago
Have you noticed that there are a lot of companies now that are trying to build advanced AI-driven robots? This is not a coincidence.
azan_ · 11 days ago
The relation won't invert, because it's very easy and quick to train a guy piling up bricks, while training an architect is slow and hard. If low-skilled jobs start paying much better than high-skilled ones, people will just change jobs.
tomjen3 · 11 days ago
The industrial revolution took something like 98% of the jobs on farms and just disappeared them.

Could you a priori in 1800 have predicted the existence of graphics artists? Street sweepers? People who drive school buses? The whole infrastructure around trains? Sewage maintainers? Librarians? Movie stuntmen? Sound Engineers? Truck drivers?

immibis · 11 days ago
The opening of new jobs has been causally unlinked from the closing of old jobs - especially when you take the quantity into consideration. There was a well of stuff people wanted to do, that they couldn't do because they were busy doing the boring stuff. But now that well of good new jobs is running dry, which is why we see people picking up 3 really shit jobs to make ends meet. There will be a point where new jobs do not open at all, and we should probably plan for that.
pzo · 11 days ago
I think UBI can only buy some time but won't solve the problem. We need fast improvement in AI robots that can be used for automation on a mass scale: construction, farming, maybe even cooking and food processing.

Right now AI is mostly focused on automating the top levels of Maslow's hierarchy of needs rather than the bottom physiological needs. Once things like shelter (housing), food, and utilities (electricity, water, internet) are dirt cheap, UBI is less needed.

visarga · 11 days ago
> displace humans ...

AI can displace human work but not human accountability. It has no skin and faces no consequences.

> can be trained to the new job opportunities more easily ...

Are we talking about AI that always needs trainers to fix their prompts and training sets? How are we going to train AI when we lose those skills and get rid of humans?

> what do displaced humans transition to?

Humans with all-powerful AI in their pockets... what could they do if they lose their jobs?

9dev · 11 days ago
> ask that question to all the companies laying off junior folks in favor of LLMs right now. They are gleefully sawing off the branch they’re sitting on.

> Humans with all-powerful AI in their pockets... what could they do if they lose their jobs?

At which point did AI become a free commodity in your scenario?

DrewADesign · 11 days ago
> AI can displace human work but not human accountability. It has no skin and faces no consequences.

We've got a way to go to get there in many instances. So far I've seen people blame AI companies for model output, individuals for not knowing the product sold to them as a magic answer-giving machine was wrong, and other authorities in those situations (e.g. managers, parents, school administrators and teachers) for letting AI be used at all. From my vantage point, people seem to be using it as a tool to insulate themselves from accountability.

azan_ · 11 days ago
> AI can displace human work but not human accountability. It has no skin and faces no consequences.

Let's assume that we have amazing AI and robotics, better than humans at everything. If you could choose between robosurgery (completely automatic) with 1% mortality for 5,000 USD vs surgery performed by a human with 10% mortality and a 50,000 USD price tag, would you really choose the human just because you can sue him? I wouldn't. I don't think anyone thinking rationally would.

ACCount37 · 11 days ago
Is the ability to burn someone at the stake for making a mistake truly vital to you?

If not, then what's the advantage of "having skin"? Sure, you can't flog an AI. But AI doesn't need to be threatened with flogging to perform at the peak of its abilities. A well-designed AI performs at the peak of its abilities always, and if that isn't enough, you train it until it is.

Matumio · 11 days ago
Those displaced workers need an income first, job second. What they were producing is still getting done. This means we have gained freedom to choose what else is worth doing. The immediate problem is the lack of income. There is no lack of useful work to do, it's just that most of it doesn't pay well.
ip26 · 11 days ago
For the moment, perhaps it could be jobs that LLMs can’t be trained on. New jobs, niche jobs, secret or undocumented jobs…

It’s a common point now that LLMs don’t seem to be able to apply knowledge about one thing to how a different, unfamiliar thing works. Maybe that will wind up being our edge, for a time.

wouldbecouldbe · 11 days ago
Yeah, but the opening of new kinds of jobs has not always been instant. It can take decades, and for instance was one of the reasons for the French Revolution. The internet has already created a huge amount of monopolies and wealth concentration. AI seems likely to further this.
nikolayasdf123 · 11 days ago
> what do displaced humans transition to?

We assume there must be something to transition to. Very well, there can be nothing.

We assume people will transition. Very well, they may not transition at all and "disappear" en masse (the same effect as a war or an empire collapse).

solumunus · 11 days ago
We also may not need to worry about it for a long time. I'm more and more falling on this side. LLMs are hitting diminishing returns, so until there's a new innovation (I can't see any on the horizon yet) I'm not concerned for my career.
player1234 · 9 days ago
Improve how? And when? Give us the map. Making a prediction straight into sci-fi territory and then becoming worried about that future is hella lame.
d2veronica · 11 days ago
During the Industrial Revolution, many who made a living by the work of their hands lost their jobs, because there were machines and factories to do their work. Then new jobs were created in factories, and then many of those jobs were replaced by robots.

Somehow many idiotic white-collar jobs have been created over the years. How many web applications and websites are actually needed? When I was growing up, the primary sources of knowledge were teachers, encyclopedias, and dictionaries, and those covered a lot. For the most part, we've been inventing problems to solve and wasting a tremendous amount of resources.

Some wrote malware or hacked something in an attempt to keep this in check, but harming and destroying just mean more resources used to repair and rebuild, and real people can be hurt.

At some point in the coming years, many white-collar workers will lose their jobs again, and there will be too many unemployed because not enough blue-collar jobs will be available.

There won’t be some big wealth redistribution until AI convinces people to do that.

The only answer is to create more nonsense jobs, like AI massage therapist and robot dog walker.

jazzyjackson · 11 days ago
I don't know maybe they can grow trees and build houses.
seanmcdirmid · 11 days ago
The robots? I see this happening soon, especially for home construction.
chung8123 · 11 days ago
It makes me wonder if we will be much more reserved with our thoughts and teachings in the future given how quickly they will be used against us.
kbrkbr · 11 days ago
Here is another perspective:

> In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs

That may well be why these technologies were ultimately successful. Think of millions and millions being cast out.

They won't just go away. And they will probably not go down without a fight. "Don't buy AI-made, brother!", "Burn those effing machines!" It's far from unheard of in history.

Also: who will buy, if no one has money anymore? What will the state do when tax income thus goes down, while social welfare and policing costs go up?

There are other scenarios, too: everybody gets most stuff for free, because machines and AIs do most of the work. Working communism for the lower classes, while the super-rich stay super-rich (like in real existing socialism). I don't think that is a good scenario either. In the long run it will make humanity lazy and dumb.

In any case I think what might happen is not easy to guess, so many variables and nth-order effects. When large systems must seek a new equilibrium all bets are usually off.

keiferski · 11 days ago
There’s a simple flaw in this reasoning:

Just because X can be replaced by Y today doesn't imply that it will be in a future where we are aware of Y, and factor it into the background assumptions about the task.

In more concrete terms: if “not being powered by AI” becomes a competitive advantage, then AI won’t be meaningfully replacing anything in that market.

You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because "made by AI" is becoming a negative label in a world where the presence of AI video is widely known.

Of course this doesn’t apply to every job, and indeed many jobs have already been “replaced” by AI. But any analysis which isn’t reflectively factoring in the reception of AI into the background is too simplistic.

keiferski · 11 days ago
Just to further elaborate on this with another example: the writing industry. (Technical, professional, marketing, etc. writing - not books.)

The default logic is that AI will just replace all writing tasks, and writers will go extinct.

What actually seems to be happening, however, is this:

- obviously written-by-AI copywriting is perceived very negatively by the market

- companies want writers that understand how to use AI tools to enhance productivity, but understand how to modify copy so that it doesn’t read as AI-written

- the meta-skill of knowing what to write in the first place becomes more valuable, because the AI is only going to give you a boilerplate plan at best

And so the only jobs that seem to have been replaced by AI directly, as of now, are the ones writing basically forgettable content, report-style tracking content, and other low-level things. Not great for the jobs lost, but also not a death sentence for the entire profession of writing.

jaynetics · 11 days ago
As someone who used to be in the writing industry (a whole range of jobs), this take strikes me as a bit starry-eyed. Throw-away snippets, good-enough marketing, generic correspondence, hastily compiled news items, flairful filler text in books etc., all this used to be a huge chunk of the work, in so many places. The average customer had only a limited ability to judge the quality of texts, to put it mildly. Translators and proofreaders already had to prioritize mass over flawless output, back when Google Translate was hilariously bad and spell checkers very limited. Nowadays, even the translation of legal texts in the EU parliament is done by a fraction of the former workforce. Very few of the writers and none of the proofreaders I knew are still in the industry.

Addressing the wider point, yes, there is still a market for great artists and creators, but it's nowhere near large enough to accommodate the many, many people who used to make a modest living, doing these small, okay-ish things, occasionally injecting a bit of love into them, as much as they could under time constraints.

zarzavat · 11 days ago
The assumption here is that LLMs will never pass the Turing test for copywriting, i.e. AI writing will always be distinguishable from human writing. Given that models that produce intelligible writing didn't exist a few years ago, that's a very bold assumption.
Scarblac · 11 days ago
Seems a bit optimistic to me. Companies may well accept a lower quality than they used to get if it's far cheaper. We may just get shittier writing across the board.

(and shittier software, etc)

jhbadger · 11 days ago
>You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because made by AI is becoming a negative label in a world where the presence of AI video is widely known.

But that's because, at present, AI-generated video isn't very good. Consider the history of CGI. In the 1990s and early 2000s, it was common to complain about how the move away from practical sets in favor of CGI was making movies worse. And it was! You had backgrounds and monsters that looked like they had escaped from a video game. But that complaint has pretty much died out these days as the tech got better (although Nolan's Oppenheimer did weirdly hype the fact that its Trinity blast was recreated with practical effects rather than CGI).

morsecodist · 11 days ago
I don't agree that it is because of the "quality" of the video. The issue with AI art is that it lacks intentional content. I think people like art because it is a sort of conversation between the creator and the viewer; it is interesting because it has a consistent perspective. It is possible AI art could one day be indistinguishable, but for people to care about it, I feel they would need to lie and say it was made by a particular person, or create some sort of persona for the AI. But there are a lot of people who want to do the work of making art. People are not the limiting factor; in fact, we have way more people who want to make art than there is a market for it. What I think is more likely is that AI becomes a tool in the same way CGI is a tool.
keiferski · 11 days ago
CGI is a good analogy because I think AI and creators will probably go in the same direction:

You can make a compelling argument that CGI operators outcompeted practical effects operators. But CGI didn’t somehow replace the need for a filmmaker, scriptwriter, cinematographers, etc. entirely – it just changed the skillset.

AI will probably be the same thing. It’s not going to replace the actual job of YouTuber in a meaningful sense; but it might redefine that job to include being proficient at AI tools that improve the process.

djtango · 11 days ago
That's a Nolan thing like how Dunkirk used no green screen.

I think Harry Potter and Lord of the Rings embody the transition from old school camera tricks to CGI as they leaned very heavily into set and prop design and as a result have aged very gracefully as movies

yoz-y · 11 days ago
That said, the complaint is coming back. Namely because most new movies use an incredible amount of CGI and due to the time constraints the quality suffers.

As such, CGI is once again becoming a negative label.

I don’t know if there is an AI equivalent of this. Maybe it’s the fact that, as models move away from one big generalist model at launch towards a multitude of smaller expert models (while retaining the branding, e.g. GPT-4), the quality goes down.

__MatrixMan__ · 11 days ago
Do you get the feeling that AI generated content is lacking something that can be incrementally improved on?

Seems to me that it's already quite good in any dimension that it knows how to improve on (e.g. photorealism) and completely devoid of the other things we'd want from it (e.g. meaning).

Barrin92 · 11 days ago
>But that's because, at present, AI generated video isn't very good.

It isn't good, but that's not the reason. There's a paper from about 10 years ago where people used a computer system to generate Bach-like music that even Bach experts couldn't reliably tell apart from the real thing, yet nobody listens to bot music. (Likewise, nobody except engine programmers watches computer chess, despite its superiority; human chess is thriving more now, including commercially, than it ever did.)

In any creative field what people are after is the interaction between the creator and the content, which is why compelling personalities thrive more, not less in a sea of commodified slop (be that by AI or just churned out manually).

It's why we're in an age where twitch content creators or musicians are increasingly skilled at presenting themselves as authentic and personal. These people haven't suffered from the fact that mass production of media is cheap, they've benefited from it.

danielbln · 11 days ago
Ironically, while the non-CGI SFX in e.g. Interstellar looked amazing, that sad fizzle of a practical explosion in Oppenheimer did not do the real thing justice and would've been better served by proper CGI VFX.
antirez · 11 days ago
To understand why this is too optimistic, you have to look at areas where AI is already almost human-level. Translations are more and more done exclusively with AI, or with massive AI help (destroying many jobs anyway), at this point. Now ebook reading is switching to AI. Book and music album covers are often done with AI (even if this is most of the time NOT advertised), and so forth. If AI progresses more in a short timeframe (the big "if" in my blog post), we will see a lot of things done exclusively by AI (and even done better 90% of the time, since most humans doing a given job are not excellent at it). This will be fine if governments immediately react and the system changes. Otherwise there will be a lot of people to feed without a job.
keiferski · 11 days ago
I can buy the idea that simple specific tasks like translation will be dramatically cut down by AI.

But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.

AI art seems to basically only be viable when it can’t be identified as AI art. That might not matter if the intention is to replace cheap graphic design work. But it’s certainly nowhere near developed enough to create anything more sophisticated: work that both reads as human-made and has the imperfect artifacts of a human creator. A lot of the modern arts are also personality-driven, where the identity and publicity of the artist is a key part of their reception. There are relatively few totally anonymous artists.

Beyond these very specific examples, however, I don’t think it follows that all or most jobs are going to be replaced by an AI, for the reasons I already stated. You have to factor in the sociopolitical effects of technology on its adoption and spread, not merely the technical ones.

Wowfunhappy · 11 days ago
> Now ebook reading is switching to AI.

IMO these are terrible; I don't understand how anyone uses them. This is coming from someone who has always loved audiobooks but has never been particularly precious about the narrator. I find the AI stuff unlistenable.

evanelias · 11 days ago
> Book and music album covers are often done with AI (even if this is most of the times NOT advertised)

This simply isn't true, unless you're considering any minor refinement to a human-created design to be "often done with AI".

It certainly sounds like you're implying AI is often the initial designer or primary design tool, which is completely incorrect for major publishers and record labels, as well as many smaller independent ones.

spenrose · 11 days ago
Look at your examples. Translation is a closed domain; the LLM is loaded with all the data and can traverse it. Book and music album covers _don't matter_ and have always been arbitrary reworkings of previous ideas. (Not sure what “ebook reading” means in this context.) Math, where LLMs also excel, is a domain full of internal mappings.

I found your post “Coding with LLMs in the summer of 2025 (an update)” very insightful. LLMs are memory extensions and cognitive aides which provide several valuable primitives: finding connections adjacent to your understanding, filling in boilerplate, and offloading your mental mapping needs. But there remains a chasm between those abilities and much of the work people do.

apwell23 · 11 days ago
> Book and music album covers are often done with AI

These suck. Things made with AI just suck big time. Not only are they stupid, but they have negative value for your product.

I cannot think of a single purely AI-made video, song, or other work of art that is any good.

All AI has done is falsely convince people that they can now create things they had no skill to create before AI.

crote · 10 days ago
> Translations are more and more done exclusively with AI or with a massive AI help

As someone who speaks more than one language fairly well: We can tell. AI translations are awful. Sure, they have gotten good enough for a casual "let's translate this restaurant menu" task, but they are not even remotely close to reaching human-like quality for nontrivial content.

Unfortunately I fear that it might not matter. There are going to be plenty of publishers who are perfectly happy to shovel AI-generated slop when it means saving a few bucks on translation, and the fact that AI translation exists is going to put serious pricing pressure on human translators - which means quality is inevitably going to suffer.

An interesting development I've been seeing is that a lot of creative communities treat AI-generated material like it is radioactive. Any use of AI will lead to authors or even entire publishers getting blacklisted by a significant part of the community - people simply aren't willing to consume it! When you are paying for human creativity, receiving AI-generated material feels like you have been scammed. I wouldn't be surprised to see a shift towards companies explicitly profiling themselves as anti-AI.

onlyrealcuzzo · 11 days ago
It's becoming a negative label because they aren't as good.

I'm not saying it will happen, but it's possible to imagine a future in which AI videos are generally better, and if that happens, almost by definition, people will favor them (otherwise they aren't "better").

glhaynes · 11 days ago
I'm not on Facebook, but, from what I can tell, this has arguably already happened for still images on it. (If defining "better" as "more appealing to/likely to be re-shared by frequent users of Facebook.")
techpineapple · 11 days ago
I mean, I can imagine any future, but the problem with “created by AI” is that, because it’s relatively inexpensive, it seems like it will necessarily become noise rather than signal: if anyone can pop out a high-quality video in a day, the signal reverts to the celebrity marketing the video rather than the video itself.
yoavm · 11 days ago
Perhaps this will go the way the industrial revolution did? A knife handcrafted by a Japanese master might have a very high value, but 99.9% of the knives are mass produced. "Creators" will become artisans - appreciated by many, consumed by few.
danielvaughn · 11 days ago
Another flaw is the assumption that humans won’t find other things to do. I don’t see the argument for that idea. If I had to bet, I’d say that if AI continues getting more powerful, humans will transition to working on more ambitious things.
johnecheck · 11 days ago
This is very similar to the 'machines will do all the work, we'll just get to be artists and philosophers' idea.

It sounds nice. But to have that, you need resources. Whoever controls the resources will get to decide whether you get them. If AI/machines are our entire economy, the people that control the machines control the resources. I have little faith in their benevolence. If they also control the political system?

You'll win your bet. A few humans will work on more ambitious things. It might not go so well for the rest of us.

bamboozled · 11 days ago
If it became magic smart, then I don’t see why we couldn’t use it to enhance ourselves and become Transhuman?
gopalv · 10 days ago
> because made by AI is becoming a negative label in a world

The negative label is the old world pulling the new one back; it rarely sticks.

I'm old enough to remember the folks saying "We used to have to paint the background blue" and "All music composers need to play an instrument" (or turn into a symbol).

d3nj4l · 11 days ago
> AI-generated videos are a mild amusement, not a replacement for video creators

If you seriously think this, you don’t understand the YouTube landscape. Shorts - which have incredible view times - are flooded with AI videos. Most thumbnails these days are made with AI image generators. There’s an entire industry of AI “faceless” YouTubers who do big numbers with nobody in the comments noticing. The YouTuber Jarvis Johnson made a video about how his feed has fully AI generated and edited videos with great view counts: https://www.youtube.com/watch?v=DDRH4UBQesI

What you’re missing is that most of these people aren’t going onto Veo 3, writing “make me a video” and publishing that; these videos are a little more complex in that they have separate models writing scripts, generating voiceover, and doing basic editing.
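To make that concrete, here is a minimal sketch of what such a pipeline might look like. All function names here are hypothetical stand-ins, not any real vendor API; each stub marks where a model call would go:

    # Hypothetical "faceless" video pipeline: separate stages for
    # scripting, voiceover, and assembly, as described above.
    def write_script(topic: str) -> str:
        # Stage 1: a text model drafts a short script for the topic.
        return f"(generated script about {topic})"

    def synthesize_voiceover(script: str) -> bytes:
        # Stage 2: a text-to-speech model narrates the script.
        return script.encode("utf-8")  # placeholder for audio bytes

    def assemble_video(script: str, audio: bytes) -> str:
        # Stage 3: an editing step stitches stock footage, captions,
        # and the narration into a finished clip.
        return "output.mp4"  # placeholder for the rendered file path

    def make_faceless_video(topic: str) -> str:
        script = write_script(topic)
        audio = synthesize_voiceover(script)
        return assemble_video(script, audio)

The point is just that no single "make me a video" prompt is involved; each stage is a separate, cheap model call.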

keiferski · 11 days ago
These videos and shorts are a fraction of the entire YouTube landscape, and actual creators with identities are making vastly, vastly more money - especially once you realize how YouTube and video content in general is becoming a marketing channel for other businesses. Faceless channels have functionally zero brand, zero longevity, and no real way to extend that into broader products in the way that most successful creators have done.

That was my point: someone that has an identity as a YouTuber shouldn’t worry too much about being replaced by faceless AI bot content.

MichaelZuo · 11 days ago
That’s the fundamental issue with most “analysis”, and most discussions really, on HN.

Since the vast vast majority of writers and commentators are not literal geniuses… they can’t reliably produce high quality synthetic analysis, outside of very narrow niches.

And yet, for most comment chains on HN to make sense, readers have to pretend some meaningful text was produced beyond happenstance.

Partly because quality is measured relative to the average, and partly because the world really is getting more complex.

nprateem · 11 days ago
Oh come on. I may not be a genius but I can turn my mind to most things.

"I may not be a gynecologist, but I'll have a look."

variadix · 11 days ago
Re: YT AI content. That is because AI video is (currently) low quality. If AI video generators could spit out full length videos that rivaled or surpassed the best human made content people wouldn’t have the same association. We don’t live in that world yet, but someday we might. I don’t think “human made” will be a desirable label for _anything_, videos, software, or otherwise, once AI is as good or better than humans in that domain.
j45 · 11 days ago
Poorly made videos are poorly made videos.

That's true whether a poor video is made by a human directly, or made poorly by a human using AI.

Using software like AI to create videos with sloppy quality and results reflects on the creator's skill.

Currently the use of AI leans towards sloppy because of content creators' lower digital literacy with AI, and because they only realize, once they get into it, how much goes into videos.

andai · 11 days ago
This only works in a world where AI sucks and/or can be easily detected. I've already found videos where on my 2nd or 3rd time watching I went, "wait, that's not real!" We're starting to get there, which is frankly beyond my ability to reason about.

It's the same issue with propaganda. If people say a movie is propaganda, that means the movie failed. If a propaganda movie is good propaganda, people don't talk about that. They don't even realize. They just talk about what a great movie it is.

jostylr · 11 days ago
One thing to keep in mind is not so much that AI would replace the work of video creators for general video consumption, but rather it could create personalized videos or music or whatever. I experimented with creating a bunch of AI music [1] that was tailored to my interests and tastes, and I enjoy listening to them. Would others? I doubt it, but so what? As the tools get better and easier, we can create our own art to reflect our lives. There will still be great human art that will rise to the top, but the vast inundation of slop to the general public may disappear. Imagine the fun of collaboratively designing whole worlds and stories with people, such as with tabletop role-playing, but far more immersive and not having to have a separate category of creators or waiting on companies to release products.

1: https://www.youtube.com/playlist?list=PLbB9v1PTH3Y86BSEhEQjv...

_jab · 11 days ago
I'm skeptical of arguments like this. If we look at most impactful technologies since the year 2000, AI is not even in my top 3. Social networking, mobile computing, and cloud computing have all done more to alter society and daily life than has AI.

And yes, I recognize that AI has already created profound change, in that every software engineer now depends heavily on copilots, in that education faces a major integrity challenge, and in that search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships becoming increasingly online, nor as the enablement for startups to scale without having to maintain physical compute infrastructure.

To me, the treating of AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people start to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. But I'm still holding my breath.

c0balt · 11 days ago
> in that every software engineer now depends heavily on copilots

That is maybe a bubble in certain corners of the internet. IME, most programmers in my environment rarely use copilots and certainly aren't dependent on them. They also don't only do code-monkey-esque web programming, so maybe this is sampling bias, though it should be enough to refute the point.

Raphael_Amiard · 11 days ago
Came here to say that. It’s important to remember how biased Hacker News is in that regard. I’m just out of ten years in the safety-critical market, and I can assure you that our clients are still a long way from being able to use those tools. I myself work in low-level/runtime/compiler code, and the output from AIs is often too erratic to be useful.
HDThoreaun · 10 days ago
I'm on the core SQL execution team at a database company, and everyone on the team is using AI coding assistants. Certainly not doing any monkey-esque web programming.
galangalalgol · 11 days ago
Add LED lighting to that list. It is easy to forget what a difference that made: the light pollution, but also just how dim houses used to be. CFL didn't last very long as a stopgap between incandescent and LED, and houses lit with incandescents have a totally different feel.
mdaniel · 10 days ago
And yet: https://www.axios.com/2023/02/26/car-headlights-too-bright-l...

But, for clarity, I do agree with your sentiment about their use in appropriate situations, I just have an indescribable hatred for driving at night now

atleastoptimal · 11 days ago
AI has already rendered academic take-home assignments moot. No other tech has had an impact like that, even the internet.
callc · 11 days ago
A pessimistic/realistic view of post-high-school education: credentials are proof of being able to do a certain amount of hard work, used as an easy filter by companies while hiring.

I expect universities to adapt quickly, lest they lose their whole business as degrees stop carrying the same meaning to employers.

ZYbCRq22HbJ2y7 · 11 days ago
> AI has already rendered academic take-home assignments moot

Not really; there are plenty of things that LLMs cannot do that a professor could make students do. It is just that doing so is an asymmetric attack on the time of the professor (or whoever is grading).

IMO, credentials shouldn't be given to those who test or submit assignments without proctoring (a lot of schools allow this).

devmor · 11 days ago
What? The internet did that ages ago. We just pretended it didn't because some students didn't know how to use Google.
Davidzheng · 11 days ago
On current societal impact it might be close to the other three. But do you not think it is different in nature to other technological innovations?
shayief · 11 days ago
> in that every software engineer now depends heavily on copilots

With many engineers using copilots and since LLMs output the most frequent patterns, it's possible that more and more software is going to look the same, which would further reinforce the same patterns.

For example, the em-dash habit requires additional prompts and instructions to override. Doing anything unusual would require more effort.
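As a concrete illustration, here is a sketch of the kind of override that's needed. The chat-style message format is the generic one; the exact API and the wording of the instruction are assumptions, not any specific vendor's:

    # Hypothetical sketch: without an explicit style instruction, the
    # model falls back to its most frequent patterns (em-dashes, stock
    # phrasing). Overriding the default costs extra prompt budget.
    messages = [
        {"role": "system",
         "content": "Write in plain prose. Do not use em-dashes; "
                    "use commas or parentheses instead."},
        {"role": "user",
         "content": "Draft release notes for version 2.1."},
    ]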

thomasfromcdnjs · 10 days ago
Pretty sure I read Economics in One Lesson because of HN. Hazlitt makes great arguments about how automation never ruins economies as much as people think; see "Chapter 7: The Curse of Machinery".

mmmore · 11 days ago
LLMs with instruction following have been around for 3 years. Your comment gives me "electricity and gas engines will never replace the horse" vibes.

Everyone agrees AI has not radically transformed the world yet. The question is whether we should prepare for the profound impacts current technology pretty clearly presages, if not within 5 years then certainly within 10 or 25 years.

srcreigh · 11 days ago
> Could we get there? Absolutely. We just haven't yet.

What else is needed then?

tymscar · 10 days ago
I don’t know what the answer to the Collatz conjecture is, but I know it’s not “carrot”.
legucy · 11 days ago
I’m skeptical of arguments like this. If we look at most impactful technologies since the year 1980, the Web is not even in my top 3. Personal computers, spreadsheet software, and desktop publishing have all done more to alter society and daily life than has the Web.

And yes, I recognize that the Web has already created profound change, in that every researcher now depends heavily on online databases, in that commerce faces a major disruption challenge, and in that information access has been completely changed. I just don’t think those changes are on the same level as the normalization of powerful computers on everyone’s desk, as our business processes becoming increasingly digitized, nor as the enablement for small businesses to produce professional-quality documents without having to maintain expensive typesetting equipment.

To me, the treating of the Web as “different” is still unsubstantiated. Could we get there? Absolutely. We just haven’t yet. But some people start to talk about it almost in a way that’s reminiscent of Pascal’s Wager, as if the slight chance of a godly reward from investing in Web technologies means it is rational to devote our all to it. But I’m still holding my breath.
m_a_g · 11 days ago
This is not reddit.
itsalotoffun · 11 days ago
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that. [Emphasis added]

What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.

Seeing a real uptick of socio-political prognostication from extremely smart, soaked-in-AI tech people (like you, Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative, wafer-thin on real analysis, packed with "the feels", coated with what-ifs... it's sloppy thinking, and I hold you to a higher standard, antirez.

xpe · 11 days ago
>> Markets don’t want to accept that.

> What a silly premise. Markets don't care.

You read the top sentence way too literally. In context, it has a meaning — which can be explored (and maybe found) with charity and curiosity.

drcode · 10 days ago
Markets require property rights; property rights require institutions that depend on property-rights holders, so that they have incentives to preserve those rights. When we get to the point where institutions depend more on AIs than on humans, property rights for humans will become inconvenient.
xpe · 11 days ago
> All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.

I prefer the concepts and rigor from political economy: markets are both preference aggregators and coordination mechanisms.

Does your framing (voting machines and weighing machines) offer more clarity and if so, how? I’m not seeing it.

acivitillo · 11 days ago
His framing is that markets are collective consensus, and if you claim to “know better”, you need to write a lot more than a generic post. It’s that simple, and it is a reminder that antirez’s reputation as a software developer does not automatically translate into economics expertise.
cropcirclbureau · 11 days ago
Yes, but can the market not be wrong? Wrong in the sense of failing to meet our expectations as a useful engine of society? As I understood it, what this article means is that AI changes the equations across the board so completely that the current market direction appears dangerously irrational to the OP. I'm not sure what your comment was meant to add, besides haggling over semantics and attacking the author's perceived lack of expertise in socio-political philosophizing.
simgt · 11 days ago
Of course it can be wrong, and it is in many instances. It's a religion. The vast, vast majority of us would prefer to live in a stable climate with unpolluted water and some fish left in the oceans, yet "the market" is leading us elsewhere.
sota_pop · 11 days ago
> “… as a voting… as a weighing…” I’m sure I remember that as a graham, munger, or buffet quote.

> “not even wrong” - nice, one of my favorites from Pauli.

djeastm · 11 days ago
Definitely Benjamin Graham, though Buffett (two T's) brought it back
naveen99 · 11 days ago
Voting, weighing, … trading machine? You can hear or touch or weigh colors.
atleastoptimal · 11 days ago
This is an accurate assessment. I do feel that there is a routine bias on HN to underplay AI. I think it's people not wanting to lose control or relative status in the world.

AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs. If humans aren't needed to sustain productivity, humans have no leverage against things becoming significantly worse for them, gradually or all at once).

morsecodist · 11 days ago
> I do feel that there is a routine bias on HN to underplay AI

It's always interesting to see this take because my perception is the exact opposite. I don't think there's ever been an issue for me personally with a bigger mismatch in perceptions than AI. It sometimes feels like the various sides live in different realities.

pmg101 · 11 days ago
It's a Rorschach test isn't it.

Because the technology itself is so young and so nebulous everyone is able to unfalsifiably project their own hopes or fears onto it.

atleastoptimal · 11 days ago
With any big AI release, some of the top comments are usually claiming either that the tech itself is bad, relaying a specific anecdote about some AI model messing up or some study where AI isn't good, or claiming that AI is a huge bubble that will inevitably crash. The most emphatic denials of the utility of AI I've seen here go much farther than anywhere else, where criticism of AI is usually mild skepticism. Among many people it is a matter of tribal warfare that AI=bad.
tim333 · 10 days ago
I have the impression a lot depends on people's past reading and knowledge of what's going on. If you've read the likes of Kurzweil, Moravec, maybe Turing, you're probably going to treat AGI/ASI as inevitable. For people who haven't they just see these chatbots and the like and think those won't change things much.

It's maybe a bit like the early days of covid when the likes of Trump were saying it's nothing, it'll be over by the spring while people who understood virology could see that a bigger thing was on the way.

AIPedant · 11 days ago
> it's people not wanting to lose control or relative status in the world.

It's amazing how widespread this belief is among the HN crowd, despite being a shameless ad hominem with zero evidence. I think there are a lot of us who assume the reasonable hypothesis is "LLMs are a compelling new computing paradigm, but researchers and Big Tech are overselling generative AI due to a combination of bad incentives and sincere ideological/scientific blindness. 2025 artificial neural networks are not meaningfully intelligent." There has not been sufficient evidence to overturn this hypothesis and an enormous pile of evidence supporting it.

I do not necessarily believe humans are smarter than orcas; it is too difficult to say. But orcas are undoubtedly smarter than any AI system. There are billions of non-human "intelligent agents" on planet Earth to compare AI against, and instead we are comparing AI to humans based on trivia and trickery. This is the basic problem with AI, and it always has had this problem: https://dl.acm.org/doi/10.1145/1045339.1045340 The field has always been flagrantly unscientific, and it might get us nifty computers, but we are no closer to "intelligent" computing than we were when Drew McDermott wrote that article. E.g. MuZero has zero intelligence compared to a cockroach; instead of seriously considering this claim, AI folks will just sneer "are you even dan in Go?" Spiders are not smarter than beavers even if their webs seem more careful and intricate than beavers' dams... that said, it is not even clear to me that our neural networks are capable of spider intelligence! "Your system was trained on 10,000,000 outdoor spiderwebs between branches and bushes and rocks and has super-spider performance in those domains... now let's bring it into my messy attic."

thrw045 · 11 days ago
I think AI is still in the weird twilight zone it was in when it first came out: sometimes great, sometimes terrible. I still catch hallucinations when I check ChatGPT's responses against Google.

On the one hand, what it says can't be trusted; on the other, ChatGPT has found bugs in code I had written that I was unable to find myself.

I also think a reason AIs are popular and the companies haven't gone under is that probably hundreds of thousands, if not millions, of people are getting responses that contain hallucinations without knowing it. I fell into this trap myself after ChatGPT first came out. I became addicted to asking anything, and it seemed like it was right. It wasn't until later that I started realizing it was hallucinating information. How prevalent this phenomenon is is hard to say, but I still think it's pernicious.

But as I said before, there are still use cases for AI and that's what makes judging it so difficult.

wavemode · 11 days ago
I certainly understand why lots of people seem to believe LLMs are progressing towards becoming AGI. What I don't understand is the constant need to absurdly psychoanalyze the people who happen to disagree.

No, I'm not worried about losing "control or relative status in the world". (I'm not worried about losing anything, frankly - personally I'm in a position where I would benefit financially if it became possible to hire AGIs instead of humans.)

You don't get to just assert things without proof (LLMs are going to become AGI) and then state that anyone who is skeptical of your lack of proof must have something wrong with them.

iphone_elegance · 11 days ago
lmao, "underplay ai" that's all this site has been about for the last few years

ahurmazda · 11 days ago
When I hear folks glazing some kinda impending jobless utopia, I think of the intervening years. I shudder. As they say, "An empty stomach knows no morality."
ares623 · 11 days ago
This pisses me off so much.

So many engineers are so excited to work on and with these systems, opening 20 PRs per day to make their employers happy, going “yes boss!”

They think their $300k total compensation will give them a seat at the table for what they’re cheering on to come.

I say that anyone who needed to go to the grocery store this week will not be spared by the economic downturn this tech promises.

Unless you have your own fully stocked private bunker with security detail, you will be affected.

dsign · 11 days ago
Big fan of your argument and don't disagree.

If AI makes a virus to get rid of humanity, well we are screwed. But if all we have to fear from AI is unprecedented economic disruption, I will point out that some parts of the world may survive relatively unscathed. Let's talk Samoa, for example. There, people will continue fishing and living their day-to-day. If industrialized economies collapse, Samoans may find it very hard to import certain products, even vital ones, and that can cause some issues, but not necessarily civil unrest and instability.

In fact, if all we have to fear from AI is unprecedented economic disruption, humans can have a huge revolt, and a post-revolt world may be fine, turning back the clock with some help from anti-progress think-tanks. I explore that argument in more detail in this book: https://www.smashwords.com/books/view/1742992

sarchertech · 11 days ago
> Unless you have your own fully stocked private bunker with security detail, you will be affected.

If society collapses, there’s nothing to stop your security detail from killing you and taking the bunker for themselves.

I’d expect warlords to rise up from the ranks of military and police forces in a post collapse feudal society. Tech billionaires wouldn’t last long.

bongodongobob · 11 days ago
The same argument could be made for actual engineers working on steam engines, nuclear power, or semiconductors.

Make of that what you will.

owebmaster · 11 days ago
> I say that anyone who needed to go the grocery this week will not be spared by the economic downturn this tech promises.

And we are getting to a point where it's us or them. Big tech is investing so much money in this that if they do not succeed, they will go broke.

voidhorse · 11 days ago
Yes. The complete irony in software engineers' enthusiasm for this tech is that, if the boards' wishes come true, they are literally helping eliminate their own jobs. It's like the industrial revolution but worse, because at least the craftsmen weren't also the ones building the factories that would automate them out of work.

Marcuse had a term for this, "false consciousness": when the structure of capitalism ends up making people work against their own interests without realizing it. That is happening big time in software right now. We will still need programmers for hard, novel problems, but all these lazy programmers using AI to write their CRUD apps don't seem to realize the writing is on the wall.

flask_manager · 11 days ago
Here's the thing: I tend to believe that sufficiently intelligent and original people will always have something to offer others; it's irrelevant whether you imagine the others as the current consuming public, our corporate overlords, or the AI owners of the future.

There may be people who have nothing to offer others once technology advances, but I don't think anyone in a current top-percentile role would find themselves there.

Davidzheng · 11 days ago
There is no jobless utopia, even if everyone is paid and well-off with high living standards. A world where everyone is retired and pursuing their own interests is not one in which humans can thrive.
bravesoul2 · 11 days ago
Jobless means you don't need a job. But you'd make a job for yourself. Companies will offer interesting missions instead of money. And by mission I mean real missions, like space travel.
ZYbCRq22HbJ2y7 · 11 days ago
A jobless utopia doesn't even come close to passing a smell test economically, historically, or anthropologically.

As evidence of another possibility, in the US, we are as rich as any polis has ever been, yet we barely have systems that support people who are disabled through no fault of their own. We let people die all the time because they cannot afford to continue to live.

You think anyone in power is going to let you suck their tit just because you live in the same geographic area? They don't even pay equal taxes in the US today.

Try living in another world for a bit: go to jail, go to a halfway house, live on the streets. Hard mode: do it in a country that isn't developed.

Ask anyone who has done any of those things if they believe in a "jobless utopia"?

Euphoric social capitalists living in a very successful system shouldn't be relied upon for scrying the future for others.

silver_silver · 11 days ago
Realistically, a white collar job market collapse will not directly lead to starvation. The world is not 1930s America ethically. Governments will intervene, not necessarily to the point of fairness, but they will restructure the economy enough to provide a baseline. The question will be how to solve the biblical level of luxury wealth inequality without civil unrest causing us all to starve.
tim333 · 10 days ago
Assuming AI works well, I can't see any "empty stomach" stuff. It should produce abundance. People will probably have political arguments about how to divide it but it should be doable.

siliconc0w · 11 days ago
I'm on team plateau; I'm really not noticing increasing competency in my daily usage of the major models. And sometimes there seem to be regressions, where performance drops from what it could do before.

There is incredible pressure to release new models which means there is incredible pressure to game benchmarks.

Tbh a plateau is probably the best scenario. I don't think society will tolerate even more inequality plus massive job displacement.

andai · 11 days ago
I think the current economy is already dreadful. So I don't have much desire to maintain that. But it's easier to break something further than to fix it, and god knows what AI is going to do to a system with so many feedback loops.