Readit News
decimalenough · 4 months ago
I'm generally an AI skeptic, but it seems awfully early to make this call. Aside from the obvious frontline support, artist, junior coder etc, a whole bunch of white collar "pay me for advice on X" jobs (dietician, financial advice, tax agent, etc), where the advice follows set patterns only mildly tailored for the recipient, seem to be severely at risk.

Example: I recently used Gemini for some tax advice that would have cost hundreds of dollars to get from a licensed tax agent. And yes, the answer was supported by actual sources pointing to the tax office website, including a link to the office's well-hidden official calculator of precisely the thing I thought I would have to pay someone to figure out.

1vuio0pswjnm7 · 4 months ago
"I'm generally an AI skeptic, but it seems awfully early to make this call."

What call? Maybe some readers miss the (perhaps subtle) difference between "Generative AI is not ..." and "Generative AI is not going to ..."

The first can be based on fact, e.g., what has happened so far. The second is based on pure speculation. No one knows what will happen in the future. HN is continually being flooded with speculation, marketing, and hype.

In contrast, this article, i.e., the paper it discusses, is based on what has happened so far. There is no "call" being made. Only an examination of what has happened so far. Facts, not opinions.

bredren · 4 months ago
> According to the study, "users report average time savings of just 2.8 percent of work hours" from using AI tools. That's a bit more than one hour per 40 hour work week.

Could be the data is lagging, as a sibling comment said, but this seems like a wildly difficult number to report on.

It also doesn't take into account the benefits to colleagues of active users of LLMs (second order savings).

My use of LLMs often means I'm saving other people time because I can work through issues without communications loops and task switching. I can ask about much more important, novel items of discussion.

This is an important omission that lowers the paper's overall value and sets it up for headlines like this.
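The quoted figure checks out as straightforward arithmetic; a minimal sanity check, assuming the standard 40-hour work week the comment uses:

```python
# The study reports average time savings of 2.8% of work hours.
savings_fraction = 0.028
hours_per_week = 40

saved_hours = savings_fraction * hours_per_week
print(f"{saved_hours:.2f} hours saved per week")  # ~1.12 hours
```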

mac-attack · 4 months ago
> In contrast, this article, i.e., the paper it discusses, is is based on what has happened so far.

What happened in 2023 and 2024, actually.

Nitpicky but it's worth noting that last year's AI capabilities are not the April 2025 AI capabilities and definitely won't be the December 2025 capabilities.

It's using deprecated/replaced technology to make a statement that is not forward-projecting. I'm struggling to see the purpose. It's like announcing that the sun is still shining at 7pm, no?

edanm · 4 months ago
You've got a point.

But I think it's pretty clear what the parent poster means - that it's too early to imply that GenAI won't impact jobs.

Obviously you understood that that was the point parent wanted to make - because you argue against it.

1vuio0pswjnm7 · 4 months ago
Whether "AI" investments are "worth it" is not a question considered by the paper. The question it addresses is whether "AI chatbots have had a significant impact on earnings or recorded hours in an occupation." The paper makes no "call". It estimates that so far, there has been no significant impact. This is not a forecast or a prediction. It is a review of past results.
hn_throwaway_99 · 4 months ago
> "Generative AI is not ..."

But here's the thing - there is already plenty of documented proof of individuals losing their job to ChatGPT. This is an article from 2 years ago: https://www.washingtonpost.com/technology/2023/06/02/ai-taki...

Early on in a paradigm shift, when you have small moves, or people are still trying to figure out the tech, it's likely that individual moves are hard to distinguish from noise. So I'd argue that a broad-based, "just look at the averages" approach is simply the wrong approach to use at this point in the tech lifecycle.

FWIW, I'd have to search for it, but there were economic analyses done that said it took decades for the PC to have a positive impact on productivity. IMO, this is just another article about "economists using tools they don't really understand". For decades they told us globalization would be good for all countries; they just kinda forgot about the massive political instability it could cause.

> In contrast, this article, i.e., the paper it discusses, is based on what has happened so far.

Not true. The article specifically calls into question whether the massive spending on AI is worth it. AI is obviously an investment, so to determine whether it's "worth it", you need to consider future outcomes.

colinmorelli · 4 months ago
This is the real value of AI that, I think, we're just starting to get into. It's less about automating workflows that are inherently unstructured (I think that we're likely to continue wanting humans for this for some time).

It's more about automating workflows that are already procedural and/or protocolized, but where information gathering is messy and unstructured (I.e. some facets of law, health, finance, etc).

Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs, your medical history, your preferences, etc. But gathering all of that information requires a mix of collecting medical records, talking to the patient, etc. Once that information is available, we can execute a fairly procedural plan to put together a diet that will likely work for you.

These are cases for which I believe LLMs are actually very well suited, if the solution can be designed in such a way as to limit hallucinations.

karpour · 4 months ago
I recently tried looking up something about local tax law in ChatGPT. It confidently told me a completely wrong rule. There are lots of sources for this, but since some probably unknowingly spread misinformation, ChatGPT just treated it as correct. Since I always verify what ChatGPT spits out, it wasn't a big deal for me, just a reminder that it's garbage in, garbage out.
dingnuts · 4 months ago
"Hallucination" implies that the LLM holds some relationship to truth. Output from an LLM is not a hallucination, it's bullshit[0].

> Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs

No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive. and I would know, I've had to use one to help me manage an eating disorder!

There is already so much bullshit in the diet space that adding AI bullshit (again, using the technical definition of bullshit here) only stands to increase the value of an interaction with a person with knowledge.

And that's without getting into what happens when brand recommendations are baked into the training data.

0 https://link.springer.com/article/10.1007/s10676-024-09775-5

ozgrakkurt · 4 months ago
It can’t replace a human for support, it is not even close to replacing a junior developer. It can’t replace any advice job because it lies instead of erroring.

As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.

Main value you get from a programmer is they understand what they are doing and they can take the responsibility of what they are developing. Very junior developers are hired mostly as an investment so they become productive and stay with the company. AI might help with some of this but doesn’t really replace anyone in the process.

For support, there is massive value in talking to another human and having them trying to solve your issue. LLMs don’t feel much better than the hardcoded menu style auto support there already is.

I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs

franticgecko3 · 4 months ago
I agree with most of your points but this one

>I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs

No way. NFTs did not make any headway in "the real world": their value proposition was that their cash value was speculative, like most other Blockchain technologies, and that understandably collapsed quickly and brilliantly. Right now developers are using LLMs and they have real tangible advantages. They are more successful than NFTs already.

I'm a huge AI skeptic and I believe it's difficult to measure their usefulness while we're still in a hype bubble but I am using them every day, they don't write my prod code because they're too unreliable and sloppy, but for one shot scripts <100 lines they have saved me hours, and they've entirely replaced stack overflow for me. If the hype bubble burst today I'd still be using LLMs tomorrow. Cannot say the same for NFTs

atrus · 4 months ago
> As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.

How exactly is this different from getting advice from someone who acts confidently knowledgeable? Diet advice is an especially egregious example, since I can have 40 different dieticians give me 72 different diet/meal plans, each presented with 100% certainty that this is the correct one.

It's bad enough that the AI marketers push AI as some all-knowing, correct oracle, but when the anti-AI people use that as the basis for their arguments, it's somehow more annoying.

Trust but verify is still a good rule here, no matter the source, human or otherwise.

Workaccount2 · 4 months ago
I trust cutting edge models now far more than the ones from a few years ago.

People talk a lot about false info and hallucinations, which the models do in fact produce, but the examples of this have become more and more far-flung for SOTA models. It seems that now, in order to elicit bad information, you pretty much have to write out a carefully crafted trick question or ask about a topic so on the fringes of knowledge that it is covered by only a handful of papers in the training set.

However, asking "I am sensitive to sugar, make me a meal plan for the week targeting 2000cal/day and high protein with minimally processed foods" I would totally trust the output to be on equal footing with a run of the mill registered dietician.

As for the junior developer thing, my company has already forgone paid software solutions in order to use software written by LLMs. We are not a tech company, just old school manufacturing.

econ · 4 months ago
> It can’t replace a human for support,

It doesn't have to. It can replace having no support at all.

It would be possible to run a helpdesk for a free product. It might suck but it could be great if you are stuck.

Support call centers usually work in layers. Someone to pick up the phone who started 2 days ago and knows nothing. They forward the call to someone who managed to survive for 3 weeks. Eventually you get to talk to someone who knows something but can't make decisions.

It might take 45 minutes before you get to talk to only the first helper. Before you penetrate deep enough to get real support you might lose an hour or two. The LLM can answer instantly and do better than tortured minimum wage employees who know nothing.

There may be large waves of similar questions if someone or something screwed up. The LLM can handle that.

The really exciting stuff will come where the LLM can instantly read your account history and has a good idea what you want to ask before you do. It can answer questions you didn't think to ask.

This is especially great if you've had countless email exchanges with miles of text repeating the same thing over and over. The employee can't read 50 pages just to get up to speed on the issue; even if they had the time, you don't, so you explain for the 5th time that delivery should be to address B, not A, and on these days between these times, unless they are type FOO orders.

Stuff that would be obvious and easy if they made actual money.

victorbjorklund · 4 months ago
NFTs never had any real value. It was just speculation, hoping some bigger sucker would come along after you.

LLMs create real value. I save a bunch of time coding with an LLM vs without one. Is it perfect? No, but it doesn't have to be to still create a lot of value.

Are some people hyping it up too much? Sure, and reality will set in, but it won't blow up. It will rather be like the internet. In the 2000s everyone thought "slap some internet on it and everything will be solved". They overestimated the (short-term) value of the internet. But the internet was still useful.

steamrolled · 4 months ago
> It can’t replace a human for support

But it is replacing it. There's a rapidly-growing number of large, publicly-traded companies that replaced first-line support with LLMs. When I did my taxes, "talk to a person" was replaced with "talk to a chatbot". Airlines use them, telcos use them, social media platforms use them.

I suspect what you're missing here is that LLMs here aren't replacing some Platonic ideal of CS. Even bad customer support is very expensive. Chatbots are still a lot cheaper than hundreds of outsourced call center people following a rigid script. And frankly, they probably make fewer mistakes.

> and it will blow up like NFTs

We're probably in a valuation bubble, but it's pretty unlikely that the correct price is zero.

gokhan · 4 months ago
> I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs

Can't disagree more (on LLMs. NFTs are of course rubbish). I'm using them with all kinds of coding tasks with good success, and it's getting better every week. Also created a lot of documents using them, describing APIs, architecture, processes and many more.

Lately working on creating an MCP for an internal mid-sized API of a task management suite that manages a couple hundred people. I wasn't sure about the promise of AI handling your own data until starting this project, now I'm pretty sure it will handle most of the personal computing tasks in the future.

DoughnutHole · 4 months ago
> It can’t replace a human for support

It doesn’t wholly replace the need for human support agents but if it can adequately handle a substantial number of tickets that’s enough to reduce headcount.

A huge percentage of problems raised in customer support are solved by otherwise accessible resources that the user hasn't found. And AI agents are sophisticated enough to actually act on a lot of the issues that require action.

The good news is that this means human agents can focus on the actually hard problems when they’re not consumed by as much menial bullshit. The bad news for human agents is that with half the workload we’ll probably hit an equilibrium with a lot fewer people in support.

vjvjvjvjghv · 4 months ago
LLM is already very useful for a lot of tasks. NFT and most other crypto has never been useful for anything other than speculation.
NeutralCrane · 4 months ago
> As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.

Have you somehow managed to avoid the last several decades of human-sourced dieting advice?

chgs · 4 months ago
I tend to use ai for the same things I’d have used Google for in 2005.

Google is pretty much useless now as it changed into an ad platform, and I suspect AI will go the same way soon enough.

flmontpetit · 4 months ago
It seems like an obvious thing on the surface, but I've already noticed that when asked questions about LLM usage (e.g. building RAG pipelines and whatnot), ChatGPT will exclusively refer you to OpenAI products.
dcchuck · 4 months ago
I actually paid for tax advice from one of those big companies (it was recommended - last time I will take that person's recommendations!). I was very disappointed in the service. It felt like the person I was speaking to on the phone would have been better off just echoing the request into AI. So I did just that as I waited on the line. I found the answer and the tax expert "confirmed" it.
ninetyninenine · 4 months ago
According to the article the Tax expert still has a job though.
notahacker · 4 months ago
A friend of mine is working on a "RAG for tax advisers" startup. He's selling it to the tax accountants, as a way for them to spot things they wouldn't otherwise have the time to review and generate more specialist tax advisory work. There's a lot more work they could do if only it's affordable to the businesses they advise (none of whom could do their own taxes even if they wanted to!)

Jevons paradox in action: some pieces of work get lost, but a lower cost of doing the work generates more demand overall...

vishnugupta · 4 months ago
I use it everyday to an extent that I’ve come to depend on it.

For copywriting, analyzing contracts, exploring my business domain, etc etc. Each of those tasks would have required me to consult with an expert a few years ago. Not anymore.

soared · 4 months ago
Similarly, while not perfect, I use AI to help redesign my landscaping by uploading a picture of my yard and having it come up with different options.

Also took a picture of my tire while at the garage and asked it if I really needed new tires or not.

Took a picture of my sprinkler box and had it figure out what was going on.

Potentially all situations where I would’ve paid (or paid more than I already was) a local laborer for that advice. Or at a minimum spent much more time googling for the info.

notTooFarGone · 4 months ago
So in the coming few years, when you ask whether or not to change your tires, the answer will come with suggestions for shops in your area and a recommendation to change them. Do you think you would trust the outcome?
quickthrowman · 4 months ago
> Also took a picture of my tire while at the garage and asked it if I really needed new tires or not.

You can use a penny and your eyeballs to assess this, and all it costs is $0.01

mmmBacon · 4 months ago
For the tire you can also use a penny. If you stick the penny in the tread with Lincoln's head down and his hair isn't covered, then you need new tires. No AI. ;)
kbelder · 4 months ago
Now I think this conversation will happen in my lifetime:

Me: "Looks like your tire is a little low."

Youth: "How can you tell, where's your phone?"

loloquwowndueo · 4 months ago
Once they have you hooked they’ll start jacking up the prices.
jakubmazanec · 4 months ago
Awfully early? We have had "useful" LLMs for almost three years. Where is the productivity increase? Also, your example is not very relevant. 1) Would you pay the professional if you couldn't find the answer yourself? 2) If search engines were still useful (I'm assuming you googled first), wouldn't they be able to find the official calculator too?
decimalenough · 4 months ago
LLMs have come a long way in those 3 years. The early ones were absolute garbage compared to what we have today.

1) Yes. I had in fact already booked the appointment, but decided to try asking Gemini for shits and giggles.

2) I did google, but I didn't know an official calculator existed, and the search results presented me with a morass of random links that I couldn't make heads or tails of. Remember, I'm not a domain expert in this area.

gilbetron · 4 months ago
How do you know it was correct without being a tax expert? And consulting a tax expert would give you legal recourse if it was wrong.
colinmorelli · 4 months ago
As for correctness, they mentioned the LLM citing links that the person can verify. So there is some protection at that level.

But, also, the threshold of things we manage ourselves versus when we look to others is constantly moving as technology advances and things change. We're always making risk tradeoff decisions measuring the probability we get sued or some harm comes to us versus trusting that we can handle some tasks ourselves. For example, most people do not have attorneys review their lease agreements or job offers, unless they have a specific circumstance that warrants they do so.

The line will move, as technology gives people the tools to become better at handling the more mundane things themselves.

decimalenough · 4 months ago
I was looking to compute how much I can retroactively claim this year for a deduction I did not claim earlier. The LLM response pointed me to the tax office's calculator for doing exactly this, and I already knew the values of all the inputs the calculator wanted. So, yes, I'm confident it's correct.
andy99 · 4 months ago
I think in many cases, chatbots may make information accessible to people who otherwise wouldn't have it, like in the OP's case. But I'm more sceptical that it's replacing experts in specialized subjects who had previously been making a living at them. They would be serving different markets.
conductr · 4 months ago
I’m a bit of a skeptic too and kind of agree on this. Also, the human employee displacement will be slow. It will start by not eliminating existing jobs but just eliminating the need for additional headcount, so it caps the growth of these labor markets. As it does that, the folks in the roles leveraging AI the most will start slowly stealing share of demand as they find more efficient and cheaper ways to perform the work. Meanwhile, core demand is shrinking as self service by customers is increasingly enabled. Then at some step pattern, perhaps the next global business cycle down turn, the headcount starts trending downward. This will repeat a handful of times, probably taking decades to be measured in aggregate by this type of study.
dismalaf · 4 months ago
Google (non-Gemini) has always been a great source for tax advice, at least here in Canada because, if nothing else, the government's website appears to leave all its pages available for indexing (even if it's impossible to navigate on its own).
HeyLaughingBoy · 4 months ago
> tax advice that would have cost hundreds of dollars to get from a licensed tax agent

But are those really the same? You're not paying the tax agent to give you the advice per se: even before Gemini, you could do your own research for free. You're really paying the tax agent to provide advice that you can trust without having to go through the extra steps of doing deep research.

One of the most important bits of information I get from my tax agent is, "is this likely to get me audited if we do it?" It's going to be quite some time before I trust AI to answer that correctly.

freedomben · 4 months ago
Also, frequently you're buying some protection by using the licensed agent. If they give you bad advice, there's a person to go back to and, in the most extreme cases, maybe file a lawsuit or insurance claim against.
boredtofears · 4 months ago
I thought the value of using a licensed tax agent is that if they give you advice that ends up being bad, they have an ethical/professional obligation to clean up their mess.
cynicalsecurity · 4 months ago
Not consulting a real tax advisor is probably going to cost you much more.

I wouldn't be saving on tax advisors. Moreover, I would hire two different tax advisors, so I could cross check them.

ryandrake · 4 months ago
Most people’s (USA) taxes are not complex, and just require basic arithmetic to complete. Even topics like stock sales, IRA rollovers, HSAs, and rental income (which the vast majority of taxpayers don’t have) are straightforward if you just read the instructions on the forms and follow them. In 30 years of paying taxes, I’ve only had a tax professional do it once: as an experiment after I already did them myself to see if there was any difference in the output. I paid a tax professional $400 and the forms he handed me back were identical to the ones I filled out myself.
InfoHuman · 4 months ago
Those "pay me for advice on X" jobs have survived many other technological innovations. In the pre-internet time, there was an ever-expanding volume of books and articles you could read to get the advice. Then along came the world wide web, search engines, and apps - advice available from anywhere, anytime. And yet dieticians, financial advisors, lawyers, etc. have continued to be in demand.

Are LLMs finally going to be the breakthrough that changes this? Maybe. The anthropomorphized chat experience where you feel like you're talking to a person definitely helps. So does the confident style of LLM answers, which sound expert and definitive.

But I'm not sure it will work for everyone, or even for the majority. Many people lack the background knowledge, vocabulary, patience, and persistence to get good results from their own research. I've seen plenty of people who struggle to use Google effectively, and I suspect these same people will have trouble getting helpful answers on complex topics from an LLM.

gamblor956 · 4 months ago
Every month I run some trial law and tax questions past all the big AI chat bots.

They scored about 60% this month. That sounds good until you consider that 60% is a failing grade for the bar exam, and a licensed lawyer getting 40% of their legal advice wrong would be disbarred.

eqmvii · 4 months ago
Your tax example isn't far off from what's already possible with Google.

The legal profession specifically saw the rise of computers, digitization of cases and records, and powerful search... it's never been easier to "self help" - yet people still hire lawyers.

nzeid · 4 months ago
> Aside from the obvious frontline support, artist, junior coder etc, a whole bunch of white collar "pay me for advice on X" jobs (dietician, financial advice, tax agent, etc), where the advice follows set patterns only mildly tailored for the recipient, seem to be severely at risk.

These examples aren't wrong but you might be overstating their impact on the economy as a whole.

E.g. the overwhelming majority of people do not pay solely for tax advice, or have a dietician, etc. Corporations already crippled their customer support so there's no remaining damage to be dealt.

Your tax example won't move the needle on people who pay to have their taxes done in their entirety.

m3047 · 4 months ago
The biggest harm today is to people training for "people interaction" specialties that require a high degree of empathy / ability to read others: psychology, counseling, forensic interviewing. They pay a lot of money (or it's invested in them) to get trained and then have to do a practical residency / internship. They need to do those internships NOW, and a significant proportion of the population that they'd otherwise interact with to do so is off faffing about with AI. The fact that anecdotally they can't seem to create "convincing" enough AIs to take the place of subjects is damning.
datpuz · 4 months ago
Nearly every job, even a lot of creative ones, require a degree of accuracy and consistency that gen AI just can't deliver. Until some major breakthrough is achieved, not many people doing real work are in danger.
ericmcer · 4 months ago
This is a great point. I was just using it to understand various DMV procedures. It is invaluable for navigating bureaucracy, so if your job is to ingest and regurgitate a bunch of documents and procedures you may be highly at risk here.

That is a great use for it too, rather than replacing artists we have personal advisors who can navigate almost any level of complex bureaucracy instantaneously. My girlfriend hates AI, like rails against it at any opportunity, but after spending a few hours on the DMV website I sat down and fed her questions into Claude and had answers in a few seconds. Instant convert.

mr_toad · 4 months ago
> I recently used Gemini for some tax advice that would have cost hundreds of dollars to get from a licensed tax agent.

That’s like buying a wrench and changing your own spark plugs. Wrenches are not putting mechanics out of business.

Avicebron · 4 months ago
Depends on how good the wrench is, if I can walk over to the wrench, kick it, say change my spark plugs now you fuck, and it does so instantly and for free and doesn't complain....
freehorse · 4 months ago
> And yes, the answer was supported by actual sources pointing to the tax office website, including a link to the office's well-hidden official calculator of precisely the thing I thought I would have to pay someone to figure out.

Sounds like reddit could also do a good job at this, though nobody said "reddit will replace your jobs". Maybe because not as many people actively use reddit as they use generative AI now, but I cannot imagine any other reason than that.

empath75 · 4 months ago
This thought process sort of implies that there's a limited amount of work that's available to do, and once AI is doing all of it, that everyone else will just sit on their hands and stop using their brains to do stuff.

Even if every job that exists today were currently automated _people would find other stuff to do_. There is always going to be more work to do that isn't economical for AIs to do for a variety of reasons.

doctorpangloss · 4 months ago
The kind of person who wants to pay nothing for advice wasn’t going to hire a lawyer or an accountant anyway.

This fact is so simple and yet here we are having arguments about it. To me people are conflating an economic assessment - whose jobs are going to be impacted and how much - with an aspirational one - which of your acquaintances personally could be replaced by an AI, because that would satisfy a beef.

ravenstine · 4 months ago
I don't trust what anyone says in this space because there is so much money to be made (by a fraction of people) if AI lives up to its promise, and money to be made by those who claim that AI is "bullshit".

The only thing I can remotely trust is my own experience. Recently, I decided to have some business cards made, which I haven't done in probably 15 years. A few years ago, I would have either hired someone on Fiverr to design my business card or paid for a premade template. Instead, I told Sora to design me a business card, and it gave me a good design on the first try; it even immediately updated it with my Instagram link when I asked it to.

I'm sorry, but I fail to see how AI, as we now know it, doesn't take the wind out of the sails of certain kinds of jobs.

epicureanideal · 4 months ago
But didn't we have business card template programs, and even free suggested business card designs from the companies that sell business cards, almost immediately after they opened for business on the internet?
bamboozled · 4 months ago
Here's why I don't think it matters: the machine is paying for everyone's productivity boost, even your accountant's. So maybe this tide will rise all boats. Time will tell.

Your accountant also is probably saving hundreds of dollars in other areas using AI assistance.

Personally I still think you should cross check with a professional.

throwaway2037 · 4 months ago

> including a link to the office's well-hidden official calculator of precisely the thing

"[W]ell-hidden" -- what a choice of words! I had a good laugh when I read this. (No trolling!) Can you share a link? HN is so much fun for these great posts.

mmcnl · 4 months ago
But all those jobs have been at risk for a long time already, yet they still exist. There is so much "pre-AI" low-hanging fruit that is still there. So why is that? I don't have the answer, but clearly there is more to it than just technology.
mchusma · 4 months ago
Yeah, in 2023 I would expect no effect. In 2024, I think generally not; it wasn't good or deployed enough. I think 2025 might show the first signs, but I still think there is a lot of plumbing and working with these things involved. 2026, though, I expect to show an effect.
asadotzler · 4 months ago
That's not what the AI boosters and shills were saying in 2021. Might be worth a refresher if you've got the time, but nearly every timeline that's been floated by any of the leadership of the OG LLM makers has been as hallucinated as the worst answers coming from their bots.
Jeff_Brown · 4 months ago
2024 was already madness for translators and graphic artists, according to my personal anecdata.
BeFlatXIII · 4 months ago
> set patterns only mildly tailored for the recipient

If that’s true, probably for the best that those jobs get replaced. Then again, the value may have been in the personal touch (pay to feel good about your decisions) rather than quality of directions.

pclmulqdq · 4 months ago
Ironically, your example is what you used to get from a Google search back when Google wasn't aggressively monetized and enshittified.
timmytokyo · 4 months ago
I think this is an important point. It's easy to lose track of just how bad search has gotten over the years because the enshittification has been so gradual. The replacement of search by AI is capturing so many people's imaginations because it feels like the way Google felt when it first came around.

If AI replaces search, I do wonder how long before AI becomes just as enshittified as search did. You know google/OpenAI/etc will eventually need to make this stuff profitable. And subscriptions aren't currently doing that.

jmull · 4 months ago
> ...where the advice follows set patterns only mildly tailored for the recipient, seem to be severely at risk

I doubt it.

Search already "obsoletes" these fields in the same way AI does. AI isn't really competing against experts here, but against search.

It's also really not clear that AI has an overall advantage over dumb search in this area. AI can provide more focused/tailored results, but it costs more. Keep in mind that AI hasn't been enshittified yet like search. The enshittification is inevitable and will come fast and hard considering the cost of AI. That is, AI responses will be focused and tailored to better monetize you, not better serve you.

jaredcwhite · 4 months ago
> including a link to the office's well-hidden official calculator

So…all you needed was a decent search engine, which in the past would have been Google before it was completely enshittified.

worik · 4 months ago
> So…all you needed was a decent search engine,

Yes.

"...all you need": a good search engine is a big ask. Google at its height was quite good, and LLMs are shaping up to be very good search engines.

That would be enough for me to be very pleased with them.

andrewmutz · 4 months ago
"AI is all hype and is going to destroy the labor market"
thatjoeoverthr · 4 months ago
My primary worry since the start has been not that it would "replace workers", but that it can destroy the value of entire sectors. Think of resume-sending. Once both sides are automated, the practice is actually superfluous. The concept of "posting" and "applying" to jobs has to go. So any infrastructure supporting it has to go. At no point did it successfully "do a job", but the injury to the signal-to-noise ratio wipes out the economic value of a system.

This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.

JumpCrisscross · 4 months ago
> it can destroy value of entire sectors. Think of resume-sending. Once both sides are automated, the practice is actually superfluous

"Like all ‘magic’ in Tolkien, [spiritual] power is an expression of the primacy of the Unseen over the Seen and in a sense as a result such spiritual power does not effect or perform but rather reveals: the true, Unseen nature of the world is revealed by the exertion of a supernatural being and that revelation reshapes physical reality (the Seen) which is necessarily less real and less fundamental than the Unseen" [1].

The writing and receiving of resumes has been superfluous for decades. Generative AI is just revealing that truth.

[1] https://acoup.blog/2025/04/25/collections-how-gandalf-proved...

Garlef · 4 months ago
Interesting: At first I was objecting in my mind ("Clearly, the magic - LLMs - can create effect instead of only revealing it.") but upon further reflecting on this, maybe you're right:

First, LLMs are a distillation of our cultural knowledge. As such they can only reveal our knowledge to us.

Second, they are limited even more by the user's knowledge. I found that you can barely escape your "zone of proximal development" when interacting with an LLM.

(There's even something to be said about prompt engineering in the context of what the article is talking about: It is 'dark magic' and 'craft-magic' - some of the full potential power of the LLM is made available to the user by binding some selected fraction of that power locally through a conjuration of sorts. And that fraction is a product of the craftsmanship of the person who produced the prompt).

krainboltgreene · 4 months ago
Yeah man, I'm not so sure about that. My father made good money writing resumes in his college years studying for his MFA. Same for my mother. Neither of them were under the illusion that writing/receiving resumes was important or needed. Nor were the workers or managers. The only people who were confused about it were capitalists who needed some way to avoid losing their sanity under the weight of how unnecessary they were in the scheme of things.
bambax · 4 months ago
> This is what happened to Google Search

This is completely untrue. Google Search still works, wonderfully. It works even better than other attempts at search by the same Google. For example, there are many videos that you will NEVER find on YouTube search that come up as the first results on Google Search. Same for maps: it's much easier to find businesses on Google Search than on Maps. And it's even more true for non-Google websites; searching Stack Overflow questions on SO itself is an exercise in frustration. Etc.

weatherlite · 4 months ago
Yeah I agree. But this is a strong perception and why Google stock is quite cheap (people are afraid Search is dying). I think Search has its place for years to come (while it will evolve as well with AI) and that Google is going to be pretty much unbeatable unless it is broken up.
thatjoeoverthr · 4 months ago
I can't buy it, unfortunately, because I've used Google long enough to know what it can be, and currently the primary thing it turns up for me is AI-generated SEO spam. I'll agree, though, that many other search systems are inferior.
Rebuff5007 · 4 months ago
I'm not sure this is a great example... yes, the infrastructure of posting and applying to jobs has to go, but the cost of recruitment in this world would actually be much higher... you likely need more people and more resources to recruit a single employee.

In other words, there is a lot more spam in the world. Efficiencies in hiring that implicitly existed until today may no longer exist because anyone and their mother can generate a professional-looking cover letter or personal web page or w/e.

Attrecomet · 4 months ago
I'm not sure that is actually a bad thing. Being a competent employee and writing a professional-looking resume are two almost entirely distinct skill sets held together only by "professional-looking" being a rather costly marker of being in the in-group for your profession.
BrtByte · 4 months ago
Resume-sending is a great example: if everyone's blasting out AI-generated applications and companies are using AI to filter them, the whole "application" process collapses into meaningless busywork
Attrecomet · 4 months ago
No, the whole process is revealed to be meaningless busywork. But that step has been taken for a long time, as soon as automated systems and barely qualified hacks were employed to filter applications. I mean, they're trying to solve a hard and real problem, but those solutions are just bad at it.
osigurdson · 4 months ago
input -> ai expand -> ai compress -> input'

Where input' is a distorted version of input. This is the new reality.

We should start to be less impressed by volume of text and instead focus on density of information.
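The expand-then-compress loss described above can be sketched with a toy round trip. The `ai_expand` and `ai_compress` functions below are hypothetical stand-ins for LLM calls, not a real API; they exist only to show how information can fall out of the loop:

```python
# Toy model of the "input -> ai expand -> ai compress -> input'" pipeline.
# ai_expand inflates terse points into padded prose; ai_compress summarizes
# by keeping only the first clause, silently dropping anything after a comma.

def ai_expand(points):
    """Simulate an LLM inflating bullet points into verbose prose."""
    return [f"{p}, which, as is widely appreciated, matters a great deal"
            for p in points]

def ai_compress(sentences):
    """Simulate an LLM summarizing prose back into bullet points."""
    return [s.split(",")[0] for s in sentences]

original = ["ship by Friday", "budget is fixed, no overtime"]
round_tripped = ai_compress(ai_expand(original))
print(round_tripped)  # ['ship by Friday', 'budget is fixed']
```

The second point loses its "no overtime" clause entirely: `round_tripped` is a distorted `input'`, even though each step looked locally reasonable.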

blitzar · 4 months ago
> the whole "application" process collapses into meaningless busywork

Always was.

thehappypm · 4 months ago
Are you really suggesting Google Search is in decline? The latest Google earnings call suggests it’s still growing.
Zanfa · 4 months ago
Google Search is distinct from Google's expansive ad network. Google Search is now garbage, but their ads are everywhere and more profitable than ever.
nottorp · 4 months ago
I've all but given up on google search and have Gemini find me the links instead.

Not because the LLM is better, but because the search is close to unusable.

hgomersall · 4 months ago
We're in the phase of yanking hard on the enshittification handle. Of course that increases profits whilst sufficient users can't or won't move, but it devalues the product for users. It's in decline insomuch as it's got notably worse.
InDubioProRubio · 4 months ago
The line goes up, democracy is fine, the future will be good. Disregard reality
benterix · 4 months ago
GenAI is like plastic surgery for people who want to look better - looks good only if you can do it in a way it doesn't show it's plastic surgery.

Resume filtering by AI can work well on the first line (if implemented well). However, once we get to the real interview rounds and I see the CV is full of AI slop, it immediately suggests the candidate will have a loose attitude to checking the work generated by LLMs. This is a problem already.

noja · 4 months ago
> looks good only if you can do it in a way it doesn't show it's plastic surgery.

I think the plastic surgery users disagree here: it seems like visible plastic surgery has become a look, a status symbol.

BeFlatXIII · 4 months ago
In the specific case of résumé-sending, the decline of the entire sector is a good thing. Nothing but make-work.
weatherlite · 4 months ago
> This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.

Well, their Search revenue actually went up last quarter, as it has every quarter. Overall traffic might be a bit down (they don't release that data, so we can't be sure), but not revenue. While I do take tons of queries to LLMs now, the kind of queries Google actually makes a lot of money on (searching flights, restaurants etc.) I don't go to an LLM for, either out of habit or out of fear that these things are still hallucinating. If Search were starting to die I'd expect to see it in the latest quarter's earnings, but it isn't happening.

belter · 4 months ago
I had similar thoughts, but then remembered companies still burn billions on Google Ads, sure that humans...and not bots...click them, and thinking that in 2025 most people browse without ad-blockers.
disgruntledphd2 · 4 months ago
Most people do browse without ad blockers, otherwise the entire DR ads industry would have collapsed years ago.

Note also that ad blockers are much less prevalent on mobile.

theshackleford · 4 months ago
People will pay for what works. I consult for a number of ecommerce companies and I assure you they get a return on their spend.
cornholio · 4 months ago
Probably the first significant hit are going to be drivers, delivery men, truckers etc. a demographic of 5 million jobs in US and double that in EU, with ripple effects costing other millions of jobs in industries such as roadside diners and hotels.

The general tone of this study seems to be "It's 1995, and this thing called the Internet has not made TV obsolete"; same for the Acemoglu piece linked elsewhere in the thread. Well, no, it doesn't work like that: it first comes for your Blockbuster, your local shops and newspapers and so on, and transforms those middle-class jobs vulnerable to automation into minimum wages in some Amazon warehouse. Similarly, AI won't come for lawyers and programmers first, even if some fear it.

The overarching theme is that the benefits of automation flow to those who have the bleeding-edge technological capital. Historically, labor has managed to close the gap, especially through public education; it remains to be seen if this process can continue, since eventually we're bound to hit the "hardware" limits of our wetware, whereas automation continues to accelerate.

So at some point, if the economic paradigm is not changed, human capital loses and the owners of the technological capital transition into feudal lords.

Ekaros · 4 months ago
I think that drivers are probably pretty late in the cycle. Many environments they operate in are somewhat complicated, even if you do a lot to make automation possible. Say, with garbage, move to containers that can simply be lifted either by crane or forks; still, the places where those containers sit might take a lot of individual training to navigate to.

A similar thing goes for delivery. Moving a single pallet to a store, replacing carpets, or whatever: lots of complexity if you don't offload it to the receiver.

The more regular the environment, the easier it is to automate. Shelving in a store might, to my mind, be simpler than all the environments where vehicles need to operate.

And I think we know who goes first: average or below average "creative" professionals. Copywriters, artists and so on.

otabdeveloper4 · 4 months ago
Generative AI has failed to automate anything at all so far.

(Racist memes and furry pornography doesn't count.)

pydry · 4 months ago
Given that the world is fast deglobalizing there will be a flood of factory work being reshored in the next 10 years.

There's also going to be a shrinkage in the workforce caused by demographics (not enough kids to replace existing workers).

At the same time education costs have been artificially skyrocketed.

Personally the only scenario I see mass unemployment happening is under a "Russia-in-the-90s" style collapse caused by an industrial rugpull (supply chains being cut off way before we are capable of domestically substituting them) and/or the continuation of policies designed to make wealth inequality even worse.

ringeryless · 4 months ago
LLMs are the least deterministic means you could possibly ever have for automation.

What you are truly seeking is high level specifications for automation systems, which is a flawed concept to the degree that the particulars of a system may require knowledgeable decisions made on a lower level.

However, CAD/CAM, and infrastructure as code are true amplifiers of human power.

LLMs destroy the notion of direct coupling, of layered specifications, of having any actual levels involved at all: you prompt a machine trained to ascertain the important datapoints for a given model itself, when the correct model is built up with human specifications and intention at every level.

Wrongful roads lead to erratic destinations when it turns out that you actually have intentions you wish to implement IRL.

ninetyninenine · 4 months ago
Until we solve the hallucination problem, Google Search still has a place of power as something that doesn’t hallucinate.

And even if we solve this problem of hallucination, the ai agents still need a platform to do search.

If I was Google I’d simply cut off public api access to the search engine.

pixl97 · 4 months ago
>google search still has a place of power as something that doesn’t hallucinate.

Google Search is fraught with its own list of problems and crappy results. Acting like it's infallible is certainly an interesting position.

>If I was Google I’d simply cut off public api access to the search engine.

The convicted monopolist Google? Yea, that will go very well for them.

voidspark · 4 months ago
LLMs are already grounding their results in Google searches with citations. They have been doing that for a year already. Optional with all the big models from OpenAI, Google, xAI
lukeschlather · 4 months ago
People talk about LLM hallucinations as if they're a new problem, but content mill blog posts existed 15 years ago, and they read like LLM bullshit back then, and they still exist. Clicking through to Google search results typically results in lower-quality information than just asking Gemini 2.5 pro. (which can give you the same links formatted in a more legible fashion if you need to verify.)

What people call "AI slop" existed before AI and AI where I control the prompt is getting to be better than what you will find on those sorts of websites.

oytis · 4 months ago
What's the alternative here? Apart from the well-known but not so useful advice to have a ton of friends who can hire you, or to be so famous as to not need an introduction.
thatjoeoverthr · 4 months ago
There isn't one. However, every dumb thing in the world is a call to action. Maybe you can show how to do things going forward :)
paulsutter · 4 months ago
Why is this a worry? Sounds wonderful
jspdown · 4 months ago
I'm a bit worried about the social impacts.

When a sector collapses and become irrelevant, all its workers no longer need to be employed. Some will no longer have any useful qualifications and won't be able to find another job. They will have to go back to training and find a different activity.

It's fine if it's an isolated event. Much worse when the event is repeated in many sectors almost simultaneously.

bilsbie · 4 months ago
Making dumb processes dumber to the point of failure is actually a feature.
lysecret · 4 months ago
Funny you call it value I call it inefficiency.
jll29 · 4 months ago
Read Paul Tetlock's research about so-called "experts" and their inability to make good forecasts.

Here's my own take:

- It is far too early to tell.

- The roll-out of ChatGPT caused a mind-set revolution. People now "get" what is already possible, and it encourages conceiving and pursuing new use cases based on what people have seen.

- I would not recommend anyone train to become a translator, for sure; even before LLMs, people were paid penny amounts per word or line translated, and rates plummeted further due to tools that cache translations from previous versions of documents (SDL TRADOS etc.). The same decline is not to be expected for interpreters.

- Graphic designers that live from logo designs and similar works may suffer fewer requests.

- Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.

- LLMs are a basic technology that will now be embedded into various products, from email clients and word processors to workflow tools and chat clients. This will take 2-3 years, and after that it may reduce the number of people needed in an office with a secretarial/admin/"analyst" type background.

- Industry is already working on the next-gen version of smarter tools for medics and lawyers. This is more of a 3-5 year development, but then again, some early adopters started 2-3 years ago already. Once this is rolled out, there will be less demand for assistant-type jobs such as paralegals.

meroes · 4 months ago
My dentist already uses something called OverJet(?) that reads X-rays for issues. They seem to trust it, and it agreed with what they suspected on the X-rays. Personally, I’ve been misdiagnosed through X-rays by a medical doctor, so even as an LLM skeptic, I’m slightly favorable to AI in medicine.

But I already trust my dentist. A new dentist deferring to AI is scary, and obviously will happen.

aaronbaugher · 4 months ago
I had a misread X-ray once, and I can see how a machine could be better at spotting patterns than a tired technician, so I'm favorable too. I think I'd like a human to at least take a glance at it, though.

The mistake on mine was caught when a radiologist checked over the work of the weekend X-ray technician who missed a hairline crack. A second look is always good, and having one look be machine and the other human might be the best combo.

NegativeK · 4 months ago
I regularly get to hear veterinarian rants. AI is being forced on them by the corporate owners in multiple fronts. The pattern goes:

Why aren’t you using the AI x-ray? Because it too often misdiagnoses things and I have to spend even more time double checking. And I still have to get a radiologist consult.

Why are you frustrated that we swapped out the blood testing thingamabob with an AI machine? Because it takes 10 minutes to do what took me 30 seconds with a microscope and is STILL not doing the full job, despite bringing this up multiple times.

Why aren’t you relying more on the AI text to speech for medical notes? Because the AVMA said that a doctor has to review all notes. I do, and it makes shit up in literally every instance. So I write my own and label the transcription as AI instead of having to spend even more time correcting it.

The best part is that the majority of vets (at least in this city) didn’t do medical notes for pets. Best you’d often get when asking is a list of costs slapped together in the 48 hours they had to respond. Now, they just use the AI notes without correcting them. We’ve gone from zero notes, so at least the next doctor knows to redo everything they need, to medical notes with very frequent significant technical flaws but potentially zero indication that it’s different from a competent doctor’s notes.

This is the wrong direction, and it’s not just new doctors. It’s doctors who are short on time doing what they can with tools that promised what isn’t being delivered. Or doctors being strong armed into using tools by the PE owners who paid for something without checking to see if it’s a good idea. I honestly do believe that AI will get there, but this is a horrible way to do it. It causes harm.

spondylosaurus · 4 months ago
> Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.

This is such a broad category that I think it's inaccurate to say that all editors will be automated, regardless of your outlook on LLMs in general. Editing and proofreading are pretty distinct roles; the latter is already easily automated, but the former can take on a number of roles more akin to a second writer who steers the first writer in the correct direction. Developmental editors take an active role in helping creatives flesh out a work of fiction, technical editors perform fact-checking and do rewrites for clarity, etc.

sxg · 4 months ago
> Read Paul Tetlock's research about so-called "experts" and their inability to make good forecasts

Do you mean Philip Tetlock? He wrote Superforecasting, which might be what you're referring to?

asadotzler · 4 months ago
We're ten years and a trillion dollars into this. When we were 10 years and $1T into the massive internet buildout between 1998 and 2008, that physical network had added over a trillion dollars to the economy, and then about a trillion more every year after. How's the nearly ten years of LLM AI stacking up? Do we expect it'll add a trillion dollars a year to the economy in a couple of years? I don't. Not even close. It'll still be a net-drain industry, deeply in the red. That trillion dollars could have done so much good if spent on something more serious than man-child dreams about creating god computers.
empath75 · 4 months ago
> - Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.

It has been a very, very long time since editors have been proof-reading prose for typos and grammar mistakes, and you don't need LLMs for that. Good editors do a lot more creative work than that, and LLMs are terrible at it.

Deleted Comment

voxl · 4 months ago
Name a better duo: software engineering hype cycles and anti-intellectualism
42lux · 4 months ago
We were the stochastic parrots all along.
nearbuy · 4 months ago
Isn't your post anti-intellectual, since you're denigrating someone without justification just for referencing the work of a professor you disagree with?
nradov · 4 months ago
Video VFX artists are already suffering from lower demand.
akshaybhalotia · 4 months ago
I humbly disagree. I've seen team members and sometimes entire teams being laid off because of AI. It's also not just layoffs, the hiring processes and demand have been affected as well.

As an example, many companies have recently shifted their support to "AI first" models. As a result, even if the team or certain team members haven't been fired, the general trend of hiring for support is pretty much down (anecdotal).

I agree that some automation helps humans do their jobs better, but this isn't one of those cases. When you're looking for support, something has clearly gone wrong. Speaking or typing to an AI which responds with random unrelated articles or "sorry I didn't quite get that" is just evading responsibility in the name of "progress", "development", "modernization", "futuristic", "technology", <insert term of choice>, etc.

johnfn · 4 months ago
How do you know that these layoffs are the result of AI, rather than AI being a convenient place to lay the blame? I've seen a number of companies go "AI first" and stop hiring or have layoffs (Salesforce comes to mind) but I suspect they would have been in a slump without AI entirely.
danans · 4 months ago
> How do you know that these layoffs are the result of AI, rather than AI being a convenient place to lay the blame?

Both of those can be true, because companies are placing bets that AI will replace a lot of human work (by layoffs and reduced hiring), while also using it in the short term as a reason to cut short term costs.

n_ary · 4 months ago
You are indeed correct, in my opinion. Salesforce went too deep into blockchain and big data and probably never recovered from the sunk costs. But to stay relevant they need to risk another bet while also conserving (death by a thousand cuts and all), so they lay off while jumping on the AI bandwagon. And how fortunate it is that LLM sellers tout productivity gains (zero backing data, but hey) as a benefit, so it also falsely supports their failure as a success.
ponector · 4 months ago
AI is not hurting jobs in Denmark they said.

Software development jobs there have bigger threat: outsourcing to cheaper locations.

As well for teachers: it is hard to replace a person supervising kids with a chatbot.

pc86 · 4 months ago
Has any serious person ever suggested replacing teachers with chatbots? Seems like a non sequitur.
geraneum · 4 months ago
> I humbly disagree

Both your experience and what the article (research) says can be valid at the same time. That’s how statistics works.

venantius · 4 months ago
It is possible for all of the following to be true:

1. This study is accurate.
2. We are early in a major technological shift.
3. Companies have allocated massive amounts of capital to this shift that may not represent a good investment.
4. Assuming that the above three will remain true going forward is a bad idea.

The .com boom and bust is an apt reference point. The technological shift WAS real, and the value to be delivered ultimately WAS delivered…but not in 1999/2000.

It may be we see a massive crash in valuations but AI still ends up the dominant driver of software value over the next 5-10 years.

hangonhn · 4 months ago
That's a repeating pattern with technologies. Most of the early investments don't pay off and the transformation does happen but also quite a bit later than people predicted. This was true of the steam engine, the telegraph, electricity, and the railroad. It actually tends to be the later stage investors who reap most of the reward because by then the lessons have been learned and solutions developed.
asadotzler · 4 months ago
The dot com boom gave us $1T in physical broadband, fiber, and cellular networking that's added many many trillions to the economy since. What's LLM-based AI gonna leave us when its bubble pops? Will that AI infrastructure be outliving its creators and generating trillions for the economy when all the AI companies collapse and are sold off for parts and scrap?
n_ary · 4 months ago
It is hard to tell at this point about AI/LLMs. But the powerful hardware backing them has numerous potential uses in other research and innovations which are not yet clear to us. Just like the fiber and cellular networks made Meta or Airbnb obscenely rich, the hardware may well facilitate other forms of tech: maybe quantum computing, or heck, even blockchain may be back (surprisingly, these are suddenly very legal and less regulated this year!), or who knows what might be new.

While the LLM trend is already going to the dumpster (notice how we no longer receive 1B/4B/8B models from Meta since Llama 4, nor from similar competitors, but only from China?), I firmly believe that commodity hardware will improve in this race, allowing LLMs (or their next iteration) to run on regular devices and become as ubiquitous as Siri/Cortana.

venantius · 4 months ago
Among other things the big tech companies are literally planning to build nuclear power plants off this so I think the infrastructure investments will likely be pretty good.
n_ary · 4 months ago
True. But like the dot come bust, ultimate winner may not be LLM or even AI but something else entirely using the left over hardware and infrastructure which was intended for AI.
Sammi · 4 months ago
People overestimate what can be done in the short term and underestimate what can be done in the long term.
DebtDeflation · 4 months ago
The study looks at 11 occupations in Denmark in 2023-24.

Maybe instead look at the US in 2025. EU labor regulations make it much harder to fire employees. And 2023 was mainly a hype year for GenAI. Actual Enterprise adoption (not free vendor pilots) started taking off in the latter half of 2024.

That said, a lot of CEOs seem to have taken the "lay off all the employees first, then figure out how to have AI (or low cost offshore labor) do the work second" approach.

mlnj · 4 months ago
>"lay off all the employees first, then figure out how to have AI (or low cost offshore labor) do the work second"

Case in point: Klarna.

2024: "Klarna is All in on AI, Plans to Slash Workforce in Half" https://www.cxtoday.com/crm/klarna-is-all-in-on-ai-plans-to-...

2025: "Klarna CEO “Tremendously Embarrassed” by Salesforce Fallout and Doubts AI Can Replace It" https://www.salesforceben.com/klarna-ceo-tremendously-embarr...

cogman10 · 4 months ago
2025 US has some really big complicating factors that'd make assessing the job market impact really hard to gauge.

For example, the mass layoffs of federal employees.

m00dy · 4 months ago
Surprisingly, Denmark is one of the easiest countries in which to fire someone.
Sammi · 4 months ago
Workers in Denmark are almost all unionised and get unemployment benefits from their union. So it's pretty directly because of the unions that being laid off is such a small issue for someone in Denmark.
WillAdams · 4 months ago
Have any of these economists ever tried to scrape by as an entry-level graphic designer/illustrator?

Apparently not, since the sort of specific work which one used to find for this has all but vanished --- every AI-generated image one sees represents an instance where someone who might have contracted for an image did not (ditto for stock images, but that's a different conversation).

lolinder · 4 months ago
> every AI-generated image one sees represents an instance where someone who might have contracted for an image did not

This is not at all true. Some percentage of AI generated images might have become a contract, but that percentage is vanishingly small.

Most AI generated images you see out there are just shared casually between friends. Another sizable chunk are useless filler in a casual blog post and the author would otherwise have gone without, used public domain images, or illegally copied an image.

A very very small percentage of them are used in a specific subset of SEO posts whose authors actually might have cared enough to get a professional illustrator a few years ago but don't care enough to avoid AI artifacts today. That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.

probably_wrong · 4 months ago
There is more to entry-level illustrators than SEO posts. In my daily life I've witnessed a bakery, an aspiring writer of children's books, and two University departments go for self-made AI pictures instead of hiring an illustrator. Those jobs would have definitely gone to a local illustrator.
TheRealQueequeg · 4 months ago
> That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.

I prefer to get my illegally copied images from only the most humanely trained LLM instead of illegally copying them myself like some neanderthal or, heaven forbid, asking a human to make something. Such a thought is revolting; humans breathe so loud and sweat so much and are so icky. Hold on - my wife just texted me. "Hey chat gipity, what is my wife asking about now?" /s

jelder · 4 months ago
I miss the old internet, when articles didn't all have a goofy image at the top just for "optimization." With the exception of photography in reporting, it's all a waste of time and bandwidth.

Most of it wasn't bespoke assets created by humans but stock art, picked by a professional photo editor if you were lucky, though more often by the author themselves.

myaccountonhn · 4 months ago
Yeah, I saw an investment app that was filled with obviously AI-generated images. It's one of the more recommended choices in my country.

It feels very short-sighted from the company side because I nope'd right out of there. They didn't make me feel any trust for the company at all.

pj_mukh · 4 months ago
I don't know. Even with these tools, I don't want to be doing this work.

I'd still hire an entry-level graphic designer. I would just expect them to use these tools and 2x-5x their output. That's the only change I'm sensing.

ninetyninenine · 4 months ago
Also pay them less, because they don't need to be as skilled anymore since AI is covering it.
Dumblydorr · 4 months ago
Probably not; economists generally go straight from school to becoming professors, or into finance right after graduating.

That said, I don’t think entry-level illustration jobs can stick around if software can do the job better than they do. Just as we no longer employ human calculators, technological replacement is bound to occur in society, AI or not.

markus_zhang · 4 months ago
AI is different. It impacts everything directly. It's like the computer, but in overdrive. It's like trains taking over from horses, but for every job out there.

Well at least that's the potential.

surement · 4 months ago
> Have any of these economists ever tried to scrape by as an entry-level graphic designer/illustrator?

"Equip yourself with skills that other people are willing to pay for." –Thomas Sowell

pixl97 · 4 months ago
That general advice works well until it doesn't.
mattlondon · 4 months ago
It looks like the writing is on the wall for other menial and low-value creative jobs too - basic music and videos. I fully expect that 90+% of video adverts will be entirely AI generated within the next year or two. See Google Veo - they have the tech already, they have YouTube already, and they have the ad network already ...

Instead of uploading your video ad you already created, you'll just enter a description or two and the AI will auto-generate the video ads in thousands of iterations to target every demographic.

Google is going to run away with this with their ecosystem - OpenAI et al can't compete with this sort of thing.

lambdaba · 4 months ago
People will develop an eye for how AI-generated content looks, and that will make human creativity stand out even more. I'm expecting more creativity and less cookie-cutter content - I think AI generation is actually the end of the cookie-cutter era.
nottorp · 4 months ago
> fully expect that 90+% of video adverts will be entirely AI generated within the next year or two

And on the other end we'll have "AI" ad blockers, hopefully. They can watch each other.

fhd2 · 4 months ago
> The economists found for example that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves."

For me, the most interesting takeaway. It's easy to think about a task, break it down into parts, some of which can be automated, and count the savings. But it's more difficult to take into account any secondary consequences from the automation. Sometimes you save nothing because the bottleneck was already something else. Sometimes I guess you end up causing more work down the line by saving a bit of time at an earlier stage.

This can make automation a bit of a tragedy of the commons situation: It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.

chii · 4 months ago
> you end up causing more work down the line by saving a bit of time at an earlier stage

in this case, the total cost would've gone up, and thus the stakeholder (aka the person who pays) is eventually not going to want to pay, since the "old" way was cheaper/faster/better.

> It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.

not really, as long as the precondition i mentioned above (the total cost dropping) is true.

fhd2 · 4 months ago
That's probably true as long as the workers generally cooperate.

But there's also adversarial situations. Hiring would be one example: Companies use automated CV triaging tools that make it harder to get through to a human, and candidates auto generate CVs and cover letters and even auto apply to increase their chance to get to a human. Everybody would probably be better off if neither side attempted to automate. Yet for the individuals involved, it saves them time, so they do it.

BrtByte · 4 months ago
Short-term gains for individuals can gradually hollow out systems that, ironically, worked better when they were a little messy and human
