I'm less worried about stuff like ChatGPT killing things off outright, and more about just making everything noticeably a bit worse.
To take an analogy: bad voice recognition software abounds everywhere, not because it is better than what it replaced in terms of UX, but because it works just enough and allows massive cost-savings on hiring people to do customer service jobs.
A world where most marketing copy is written by mediocre AI, and more and more written and visual content is generated by big models that are technically impressive but intellectually hollow, is going to be one where the quality of everything sucks just a bit more, but it's so cheap that it becomes pervasive.
(This trend is already apparent and not created by, or limited to, ChatGPT.)
> bad voice recognition software abounds everywhere, not because it is better than what it replaced in terms of UX, but because it works just enough and allows massive cost-savings on hiring people to do customer service jobs.
This one really puzzles me.
Voice recognition replaced phone trees. And from what I can tell, it's just worse. In this particular use case I don't think it really replaced tier 1 support. Either I'm missing something that it's a lot better for some group(s) of people, or people adopted it because of promises that failed to materialize.
Phone trees are deliberately bad, so I assume voice recognition is deliberately worse. The goal is to frustrate nuisance customers so that they give up.
Pretty sure that it very noticeably results in lower costs. People often literally cannot figure out how to get through the voice recognition system to reach a human customer support person.
So the companies save money by then cutting the size of their call centers.
This feature has nothing to do with lower costs and everything to do with the management chain in customer service taking a victory lap for adding voice recognition to the phone tree. Lowering cost is certainly how they justify it, and they also get credit for being modern and keeping their systems up to date with current trends. Whether it actually saves anything or has any positive benefit is immeasurable and irrelevant.
Both were pretty bad. Even when I know I want something that should be easy, I often cannot find it. Things like account balance are just too hard to get, and then instead of a balance they finally give me a long set of numbers that includes the balance, but also the last payment, the last 10 charges... thus making it take too long to get what I really want.
Trading quality for price has been happening everywhere for long enough that it is easy to see how, unfortunately, it will now play out in the arts as well. I mean, not that long ago all clothes you wore were custom made by a tailor, all the music you listened to was played live by musicians, all stories were brought to life in front of you by theatre actors: now all of those while still available are significantly more expensive and niche than the mechanized (production or reproduction) equivalents.
ChatGPT, stable diffusion and I am sure upcoming music models will enable this mechanization in the arts, where great artists will be unaffected, but it will become nearly impossible for “good enough” artists to compete, while also pushing up the floor of what counts as an acceptable, pay-worthy level of competency, making it more difficult for people to support themselves while improving their skills.
Means that many, many more people can have something of passable quality. For example:
> not that long ago all clothes you wore were custom made by a tailor
If you could afford a tailor; otherwise you had to make do with homemade rags that constantly needed mending and looked terrible.
> all the music you listened to was played live by musicians
If you could afford to go to concerts.
> all stories were brought to life in front of you by theatre actors
If you could afford to go to the theatre.
> now all of those while still available are significantly more expensive and niche than the mechanized (production or reproduction) equivalents
Which the vast majority of people can afford, and which significantly improves their quality of life. Now they can buy clothes at Walmart or Target instead of having to wear homemade; sure, not the same as a custom tailored suit, but good enough. Now they can buy digital recordings of world class musicians and theatre actors for much, much less than it would cost to see them live.
> ChatGPT, stable diffusion and I am sure upcoming music models will enable this mechanization in the arts
That already happened decades ago, as soon as mass produced recordings became widely available. It's already next to impossible for any artist who isn't world class (or, more precisely, is not publicized so that people think they're world class) to make a living at their art. ChatGPT and the equivalent in other arts aren't going to affect that much.
Yep. Recording and photography killed almost all the value (social and financial) of middling artistic talent in music, storytelling, and visual arts, which may well have been what gave a lot of people a significant part of their sense of self-worth before that—plus, maybe, some income.
Now AI's coming for most of those who survived that first culling. And not just the middling-talent folks this time.
Also have to consider the kicking away of the ladder. Nobody gets to excellent without passing through the lower stages. Hard to stay motivated if it'll take 5 years just to see if maybe you have something a computer can't offer.
Not impossible, but definitely a raising of the bar.
> now all of those while still available are significantly more expensive and niche than the mechanized (production or reproduction) equivalents.
No, they cost exactly as much as they always had. Before machines, people didn't have wardrobes full of tailor-made clothes. Each person had, give or take one, exactly as many tailor-made clothes as we have today.
I’m not sure tailored clothes have become more expensive at all. You can have a tailored suit for what, a couple of thousand? So 1-2 months of the average wage? That’s peanuts, historically.
> all the music you listened to was played live by musicians, all stories were brought to life in front of you by theatre actors
well, no, you probably didn't listen to much music or see many stories at all because the average person couldn't afford to experience such things more than a few times in their life. shocking that Luddites are still a thing in the 21st century, especially here.
And in return I can get subtitles on literally everything I want now instead of hoping the creator added them. I have no doubt that some places that were paying for transcription decided to go the AI/ML route, but for every 1-2 of those there have to be thousands of examples of subtitles existing where they never would have before.
I always have YT's subtitles turned on and while they aren't perfect they are way better than the alternative (none).
Network/Streaming TV/Movies should absolutely be paying someone to, at a minimum, clean up the first pass by AI and we should demand that but I'm not at all ready to throw the baby out with the bathwater.
I've worked tangentially to some of the orgs trying to do these types of things. Having a person review everything, especially when some titles are only up for 3 to 6 months and they only get a few days' notice, is really difficult.
Public numbers for Prime are around 60,000 titles in 2021. Those are most likely going to be in four languages, and there will be at least two versions of each depending on which regions they play in. That also assumes a title is a single piece of content, not a TV show; if we assume that 50% of the titles are TV shows, each with a minimum of 10 episodes, and that every title averages around an hour (shorter shows, longer shows, and longer movies averaging out), that ends up being around 4.8 million hours' worth of content.
Let's just assume that the rate of title entry is one-to-one with the length of content, though it's much more likely 1.5-to-1 or 2-to-1 given that people have to pause, go back, and fix things. With the average worker working 2,000 hours a year, that gives us 2,400 person-years to enter the entire catalog. Manual entry also obviously leaves room for poor workers or fat-fingering, so if you wanted really high quality you would spot-check other workers, which might bloat that up to 3,000 person-years.
So if you hired a brand-new team of 3,000 unskilled workers, trained them for 6 months, and then put them to work for a year, you would be able to backfill all of Prime's catalog right now.
But what happens when you onboard, say, 10,000 titles from a new licensing deal with Searchlight Films?
You want that content up as fast as possible, and people actually like content less if it has no subtitles than if the subtitles are bad, as some other comments have said.
Also, just running the numbers: let's say you pay someone around $30,000 a year to do this data entry, a very low wage; double that for facilities, support, HR, and all that crap. 3,000 employees at that rate for a year is $180 million. The six months of training alone is $90M.
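The back-of-envelope numbers above can be reproduced with a quick sketch. One assumption I'm adding to make the figures work out: the catalog averages roughly 10 content-hours per title once episodes, languages, and regional versions are counted (the comment's own breakdown is fuzzy on this point). All figures are the commenter's estimates, not real Prime data.

```python
# Rough reproduction of the catalog-subtitling estimate above.
titles = 60_000           # public Prime catalog figure for 2021
languages = 4
versions = 2              # per-region variants
hours_per_title = 10      # assumption: ~10 hours per title once episodes count

content_hours = titles * languages * versions * hours_per_title
print(content_hours)      # 4,800,000 hours of content

work_hours_per_year = 2_000
person_years = content_hours / work_hours_per_year
print(person_years)       # 2,400 person-years at a 1:1 entry rate

staff = 3_000                 # padded for spot-checking and rework
fully_loaded_salary = 60_000  # $30k wage doubled for facilities/support/HR
annual_cost = staff * fully_loaded_salary
print(annual_cost)        # $180M per year; six months of training is ~$90M
```

At a 1.5:1 or 2:1 entry rate, the person-years and costs scale up proportionally.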
Should each of the streaming services take a large chunk of their budget just to make sure that a human reviews the subtitles, possibly at an accuracy rate only slightly higher than what machine learning can currently do? Would you rather have 5% human coverage or 100% coverage at 90% AI accuracy?
From my understanding, most content doesn't actually have subtitles unless it was on a premiere TV network that was required by government regulation to subtitle its shows, such as a BBC TV show. That means the streaming networks are actually doing this out of customer interest rather than being required to, so they're backfilling work for the people who produced the videos in the first place. And from the little bit that I've worked in broadcast, subtitle sharing isn't all that standard: Netflix, for instance, may have added subtitles, but that doesn't mean Prime Video or Hulu will get those subtitles if they take on that content later. The video producers aren't that interested in pulling the information back into their catalogs; they don't have the tech support to do stuff like that.
Also, almost all of this was dictated via Google voice input, a.k.a. subtitles via AI, and the only mistake I noticed was that it didn't understand "tangentially"; it instead put "10 generally".
you definitely see this in the translation industry. neural net machine translation misses enough nuance that it can't replace a human translator where it really matters, but it 'looks right' enough to convince clients that it can.
as a result, it's a lot harder to make a living as a freelance translator these days - there are fewer jobs and what jobs exist are often proofreading machine translations, which command lower rates because 'the machine did most of the work already,' even though they often require full rewrites.
at the same time, human translation quality has gone down too, since a lot of people will pass off machine translation as their own work, and when rates are too low, you can't spend too much time on any particular job.
I have the same worry. Another example: Is furniture nowadays more beautiful and durable than it used to be 100 years ago? Is there more variety now? Not in my book. Same goes for a lot of products that are now produced in the cheapest way possible at scale.
Looks very much like a race to the bottom dominated by a few big players to me. Consumers don't seem to care that much.
There's absolutely beautiful and durable furniture available, it's just a lot more expensive than Ikea. There's a local place that makes furniture on-demand; as far as I can tell, the quality is great, but we're talking $8–10k for a dining table. That segment of the market still exists and I don't see it going anywhere.
Cheap, mass-produced furniture has taken the bottom of the market but there really is a wide variety available today.
Your example is true only if you include the criteria that it be affordable. You can probably get any furniture you want made locally or by a highly skilled person far away who is willing to ship to you.
100 years ago, how much did the family dinner table cost? Would you measure it in hours or days of salary? When that family in 1923 went table shopping, they were either going to buy from a company like Sears Roebuck or from a local store and there probably wasn't much variety from either source.
Today, there's lots of cheap stuff available, but there's also lots of high end custom stuff available as well. We have more of everything.
Does it matter? There is a lot of 100-year-old furniture out there that nobody wants. It doesn't fit modern lifestyles.
100-year-old houses are only livable because someone put a ton of money into retrofitting things like plumbing, electric, and HVAC. Most of that has had major rework done several times since it was first added. Even then, the fundamentals of those houses mean that they cannot be retrofitted for good insulation, and it can be argued they should all be scrapped for that reason.
Consumers don't have much of a choice. Wages have stagnated for decades, so their incomes' purchasing power has decreased over time. It's often a choice between buying what's affordable or not buying it at all.
1. Creativity can be achieved at faster speeds than humans can consume.
2. We have no evidence to say AI and the techniques behind it won't keep improving (there are already so many low-hanging fruits; a lot of it is infra problems, and the other half, which we can't even imagine, can probably be solved by AI better than humans can).
> allows massive cost-savings on hiring people to do customer service
Does it, really? I find that while people can spot the difference between facts and a sales pitch, repeating the pitch enough still changes the common discourse.
People can say things like "technology x may be more expensive but saves development time" (where X can be anything from the latest frontend framework to microservices to voice recognition) while in fact there is no data that it saves any development time at all.
Is it even true that voice recognition reduces the amount of customer service compared to a simple menu system? If it isn't, the premise doesn't hold.
There was a similar dynamic with translation market and Google Translate. It didn't matter that human translation was superior. What mattered was that Google Translate (or similar) brought down prices so much that it effectively destroyed the market for everyday-type of translations. Why? Because customers said that, well, just use Google Translate and then improve it a bit.
> To take an analogy: bad voice recognition software abounds everywhere, not because it is better than what it replaced in terms of UX, but because it works just enough and allows massive cost-savings on hiring people to do customer service jobs.
I had to go through hoops talking to the Doordash bot to get a change to my order. And I pray for the poor soul that runs afoul of Google…
Also, how do you continue to train a model like this when everywhere you look is just output from the same model? Like, at some point, the real conversations and text gets overwhelmed by the fake stuff. Where does the model pull more training data from?
So you can just feed it the good info, get it to update things with more context, and it will spit out pretty decent prose (though you have to ask for it to be well written).
ChatGPT is basically automated reversion to a slightly worse mean for applicable areas. The algorithms, as sophisticated as they are, produce a credible remix of the training input.
But just give it context about what you want it to write and it will write about it. Obviously this is much easier if you have an existing body of work but you can still write to ChatGPT clumsily and it will often turn what you are trying to say into better prose.
My analogy is that it is to writing what the spell checker is to spelling. Very few people in the world have to be good at spelling anymore. Yeah everyone is okay, but it's not as valued a skill as it used to be. ChatGPT is doing the same for writing. Yes you need to be able to write to an okay level but you don't need to be able to write well, ChatGPT does that.
Haha, it's true, but I feel like GP's thought stops way short of Heidegger's. Products we buy being a little worse because of the necessities of capitalism (or whatever you'd rather say) is small change compared to the very essence of Being getting enframed into total occlusion.
That is, I don't know if I would say getting frustrated by an automated voice system is the same thing as a hammer breaking for your given Dasein, but one could make the argument I bet.
> more about just making everything noticeably a bit worse.
Hit the nail on the head. ChatGPT will replace the wrong things, and probably create a world where we still need to solve problems ChatGPT has become attached to.
It's all noise, I can't perceive a signal at this point.
It's just good at writing; its contextual understanding is bad (at first).
But give it context and it flies. Ask it to write a cover letter for an engineering role, it will turn out mediocre crap.
Give it the actual role and your CV in the prompts, you're pretty much good to go. It's unique, you still control the narrative (please highlight my ability to work under pressure) and it's done in 10 secs.
I really don't see what's not to love about that.
Also, it usually writes better if you tell it to write well.
I agree with the article but maybe you are trying to do the wrong thing with ChatGPT. I am not a native English speaker (as many of you already noticed due to my grammar mistakes), so ChatGPT have been very useful to rewrite my texts, fix grammar and maybe "beautify" them a bit. It's has been an invaluable tool in that sense. I feel much more confident now sending emails and writing important messages knowing that the grammar and tone have been "approved" by this IA.
By the way, I asked ChatGPT to fix/improve this previous comment and this is the result:
I agree with the article, but perhaps you are using ChatGPT in the wrong way. As a non-native English speaker (as many of you have likely noticed due to my grammar mistakes), ChatGPT has been extremely helpful in rewriting my texts, correcting grammar, and making them more polished. It has been an invaluable tool in that sense. I now feel more confident when sending emails and writing important messages, knowing that the grammar and tone have been reviewed and approved by this AI.
As a native English speaker, I'd be quite wary of doing that -- I thought your original message was perfectly clear (a few errors like "It's has been", but nothing that impeded my ability to understand -- in fact, I had to closely re-read your 1st message to see the difference, because my brain skipped over things like "it's has been").
The "ChatGPT improved" result is certainly a little more polished, but given ChatGPT's ability to confidently misinterpret / hallucinate, I would be worried that it could subtly change meanings, especially for more technical content.
Of course, this problem isn't just limited to AIs; I've known (native-speaker) project managers and client liaisons who have tweaked text I had written to ask technical questions and, in doing so, totally changed the meaning (and then done the same in the opposite direction, so the response was especially bewildering!)
I can understand that it's improving your confidence (and by the look of your 1st paragraph your English is good enough that you could easily identify if it changed the meaning in your message -- and that combined with a confidence boost is maybe worth the extra proof-reading you need to do) but that wariness of risk is just my $0.02.
I helped a guy who wasn't good at english send emails in a previous job. It was conflicting, because with automation we're removing trials from our lives that convey how much mastery we have in given areas.
One of the heaviest consequences of AI proliferation is that the value of understanding will continue to plummet, while the value of asking for help and delegating will rise.
This is all fine as long as there are plenty of willing subordinates, man or machine, to do the dirty work of actually knowing things for you, but what happens when they wise up to the fact that they're getting the short end of the stick?
I understand that you would prefer to sound like a native speaker. This is a perspective I've heard from others over the last few weeks too, so I've been considering it. Your writing is fine; the original version is perfectly understandable on its own.
The thing is... the only time that grammatical mistakes actually matter is when they make meaning ambiguous. And since ChatGPT doesn't know what you meant, I would be more worried that it would further distort any mistakes in that area.
I personally like your original paragraph because, while it may not be perfect, it has flavor. If everyone started using AI to fix their paragraphs, text would become boring because it would all be of similar style. It would be like reading one long continuous book written by the same author.
Imagine if restaurants all started using AI to cook their food. It would destroy the taste of food because no matter where you go the food will always taste the same. The great thing about restaurants is you could go to two different pizza places on the same street and order the same type of pizza and they would both taste different. Now imagine if AI got a hold of the recipe and made the pizza at both places. They would turn out same and taste the same.
AI has lots of flaws, but that's not inherently one of them, and in the context of ChatGPT it explicitly is not one of them. It samples from a probability distribution, and with appropriate prompts it's actually fantastic at giving you a new recipe every night, or a dozen variations on a recipe theme, and you can tailor it to exactly the level of novelty you're expecting. Randomness by itself is not the human characteristic that the current batch of AI is lacking.
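The "samples from a probability distribution" point can be made concrete. Language models pick each next token from a temperature-scaled softmax over candidate scores, which is why the same prompt can yield a different recipe every night. A toy illustration (the word list and scores are made up, not real model output):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy "next ingredient" distribution for a recipe generator.
words = ["tomato", "basil", "anchovy", "pineapple"]
logits = [2.0, 1.5, 0.5, -1.0]

# Low temperature: nearly always the most likely word (same dish every night).
# High temperature: flatter distribution, occasionally "pineapple".
random.seed(0)
print([words[sample_with_temperature(logits, t)] for t in (0.1, 1.0, 2.0)])
```

Turning the temperature knob is exactly the "level of novelty you're expecting" control the parent describes.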
It fixed some things (“ChatGPT have”, “It’s has”) but changed the meaning of the first sentence and introduced a misplaced modifier (“As a non-native English speaker, ChatGPT…”).
I'm not that impressed with the ChatGPT output. I see one or two routine grammar errors ChatGPT fixed for you, but overall its version reads worse than your original. (A regular grammar checker could catch these mistakes just as well.)
For example, it took your perfectly good:
>I am not a native English speaker (as many of you already noticed due to my grammar mistakes), ...
And replaced it with
> As a non-native English speaker (as many of you have likely noticed due to my grammar mistakes), ...
While a lot of native speakers would write something like this, it's awkward and incorrect. You didn't mean to say "As you may have noticed, as a non-native speaker, blah blah"; you meant to say "As you may have noticed, I am not a native speaker."
Apropos automated approval of "tone". The company I work for now has a policy that documents must not use any "potentially biased" terms (e.g. "whitelist" or "master"). We now have automated agents that do simple search-and-replace (e.g. s/whitelist/allowlist/) over our internal Wiki pages. With no intention of debating the value or politics of such a policy, I wonder whether anyone is contemplating employing ChatGPT for more pervasive automated tone-control.
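The search-and-replace agents described could be little more than the following sketch (the term map and function name here are hypothetical; the actual internal tooling and policy list are unknown):

```python
import re

# Hypothetical term map; the real policy list is company-internal.
REPLACEMENTS = {
    "whitelist": "allowlist",
    "master": "main",
}

def scrub(text: str) -> str:
    """Naive search-and-replace over wiki text, also handling a
    capitalized first letter. Deliberately dumb: no word boundaries,
    no context awareness - exactly the kind of blunt automation described."""
    for old, new in REPLACEMENTS.items():
        text = re.sub(old, new, text)
        text = re.sub(old.capitalize(), new.capitalize(), text)
    return text

print(scrub("Add the IP to the Whitelist, then merge to master."))
# Add the IP to the Allowlist, then merge to main.
```

The contrast with ChatGPT-based tone control is that a model could rewrite whole sentences rather than swap tokens, which is precisely what makes it both more pervasive and harder to audit.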
Notably, ChatGPT changed the meaning of what you said.
“Maybe you are trying to do the wrong thing with ChatGPT” vs “Perhaps you are using ChatGPT in the wrong way” are subtly but substantially different statements.
One implies the incorrect usage of the software, the other implies the software being used for the incorrect things.
I actually prefer the original paragraph. It reads like you wrote it, you being you the person. The second feels entirely sterile and generic.
I am not worried about whether chatgpt would hallucinate as I’m sure you read the responses and I know it’s easier for me to read and listen in a foreign language than compose.
By the way, I think you’re doing an amazing thing here taking the technology and making a profoundly useful tool out of it. By reading its revision you can get corrections and pointers. I use chatgpt similarly but in other domains. It’s not always “right,” but it’s close enough to point me in directions I wasn’t aware of before.
Your writing skills will stop improving the more you depend on this. It's similar to people losing the ability to mentally map where they are, or to learn their own city, by relying on map and routing apps. While map apps have made navigation much easier and getting lost much less frequent, it does come at some cost, perhaps acceptable. Is it acceptable to you that your English growth stalls, or, if you internalize the ChatGPT changes, that you start to talk like a generic, albeit seemingly intelligent, bot?
It’s already getting that way with things like Grammarly. It makes me sad. I know my colleagues who speak English as an additional language use it and find it helpful, but I much prefer that the English they write be the same as the English they speak. It makes engagement more interesting.
I seem to remember a book where people started using AIs like this to automatically improve their emails/texts, eventually allowing them to do more (basic responses, appointment scheduling, etc.), making communication a lot easier and more efficient.
At some point, the boundary between simply "polishing things up" and actively guiding the interactions became blurry, and the AI eventually became the go-between for everyone and started taking over.
We know this response has not been "enhanced" by the AI if my second and third sentences have not been filtered out.
"Significant" improvement? Really? They're almost the same but ChatGPT version removed some of the nuance such as `"approved"` vs `reviewed and approved`. I prefer the human-written version even with the grammar errors.
I feel like some of the nuance got lost in at least two places:
"trying to do the wrong thing with ChatGPT" vs "using ChatGPT in the wrong way"
"have been "approved" by this AI" vs "have been reviewed and approved by this AI."
Most of it is an improvement, but the change in the first sentence could really change the meaning in a way that may or may not be appropriate. That change is ambiguous but could be important.
Proofreading shouldn't change the meaning or intention of a passage.
Okay, I guess I'm in the other camp. I really did not like the ChatGPT version, and had no problem with the original.
The original was perfectly clear, and had your own unique "voice" to it. The ChatGPT version sounded like every other piece of ChatGPT, and honestly, I've become somewhat allergic to them now.
I prefer your original text, and I'd caution you against using AI this way. Embrace your writing style, make an effort to improve it but accept that it is a part of you.
The chatgpt tone is weird, be careful. AI mistakes are different than non native mistakes, and we are all going to be getting a lot more sensitive to being able to detect robots talking to us.
I think the answers about ChatGPT subtly changing the meaning or the tone are missing the point and ignoring the agency the author has. We are usually better at curating than writing, so I feel pretty confident the author went over the ChatGPT version and (as is their prerogative, and theirs only) decided that the ChatGPT version actually reflected their intent and their style better.
I've been using ChatGPT (as a non-native speaker) as well and found it tremendously useful—not only to catch grammar mistakes or provide some better vocabulary, but to understand what might have been off and provide more alternatives. I often ask it to rephrase my sentences in different styles and then pick and choose what I like the best ("in the style of the new yorker", "in the style of ayn rand", "in the style of ezra klein", "in the style of CNN").
Anybody who cares enough to have ChatGPT edit their sentences, and who cares about what they intend to express, will I think benefit tremendously from such a tool.
In fact, it is what ChatGPT is really good at: a search engine for vibes, tailored to what you feed it. It is, I think, a much richer, much more enticing tool than following some rules out of Strunk & White or some other bland business-writing handbook.
It’s gonna kill the internet as a place to find useful information. Google is already almost unusable for researching any topic that’s even adjacent to anything that can be monetized. Widespread, good-enough, AI-generated, SEO’d “articles” will push it over the edge.
TBH, I'm wondering if we need to start using yahoo style curated content directories again. Content aggregators like reddit, hacker news and lobsters already help filter out a lot of noise, but they focus on recent articles. It would be nice to have a searchable/browsable source for interesting content. It wouldn't contain the full internet, but at least it would have better relevancy.
ChatGPT prompt: Devise an economic policy that would lead to the elimination of the advertising industry. Include a list of wealthy persons and companies who would benefit from it so that I may solicit bribes... err, "political donations" from them to support my 2024 presidential campaign.
chatgpt isn't a search engine, ask it questions about a topic you know about and you'll quickly figure out it's very good at shaping BS into something believable
I don't think so. The internet is already full of shit sites, you can automate spam very easily in any quantity.
But chatGPT would actually write useful texts, most of them more useful than the average blog post. Over a few years they might be just as good as the best human articles.
Normal people could write a draft version and have chatGPT reword it, that would control the contents of the article. But I expect spammers would just clone other articles, they don't have the time.
One thing AI could do is to scale up validation. Search every topic and note the answers - do they concur? do they disagree? what is the distribution. Maybe there is no answer?
This would be a reference for AI to stop hallucinating and making factoid mistakes. The model should know when it doesn't know, or when it is stepping on a landmine by forgetting to mention something.
In the end I think the internet is going to be full of spam, AI generated or not. And we will need to use more AI to extract the signal from the noise. This time it should be local AI under user control.
"But chatGPT would actually write useful texts, most of them more useful than the average blog post. Over a few years they might be just as good as the best human articles."
If web sites start being written by A.I. and ChatGPT starts getting trained on A.I. output, it's going to start spewing garbage results.
It's only useful if trained on human input; A.I.s have no model of the world, and no A.I. can function if trained on A.I.-written texts.
Dunno, it may just be the opposite that happens in the long term. The cheaper it becomes to produce content spam, the further the traffic becomes diluted and the smaller each individual spammer's profit margins become. The very same discovery issues that plague human-created content are, in the end, also harmful to spam content.
Google is overall a pretty bad benchmark IMO. There's a lot of quality content that they seem to struggle to find. Not that it doesn't exist, you just won't find it for a host of complex reasons.
Throughout my career I've encountered a good number of people who were adding very little (to no, or in a few cases negative) value to their workplace.
It used to bother me immensely. On behalf of the employer and for their own sake. I can still find that sentiment if I look for it but today, my perspective is that at least in many cases, it's ok. If the company does well and the employee doesn't hate their life because of unfulfilling work, we're all better off than if they went unemployed.
Yes, ideally everyone should follow their passion and pursue the life of their dreams but some people don't think of work as the source of that. However, that doesn't mean that they would rather not work at all -- and I don't mean just because of the loss of income. I know that UBI is all the rage in here and I think it's an interesting thought experiment but I just don't see a world where only the 1% of performers in any field are required and able to work is a promising thing.
> Yes, ideally everyone should follow their passion and pursue the life of their dreams but some people don't think of work as the source of that.
Not only do some people not think of it that way, I doubt enough necessary jobs are the dream jobs of enough people for it to work out. I bet we need the vast majority of people to be working in jobs that aren't any part of any dream of theirs.
Besides, maybe their dream wouldn't pay, so they fund it with their bullshit day job. Then they are pursuing their dreams. Hell, maybe that dream's just "raise and provide for a family"!
> I know that UBI is all the rage in here and I think it's an interesting thought experiment but I just don't see a world where only the 1% of performers in any field are required and able to work is a promising thing.
I entirely do not follow how this would be the result of UBI.
Focusing in on the UBI section of your comment: I feel that there is always something being glossed over with the assumption that large amounts of people would stop working if we introduced UBI. I don't think this is true in the general sense (people would find other ways to work), but probably true that most people would quit their job in the short term.
Just to rant about that a bit, we acknowledge and ignore that a large proportion of people barely scraping by would probably not do what they were doing if not for the threat of homelessness and starvation. It's true that the way that things are built would have to change with UBI because most of what we benefit from in the current system is based on the exploitation of the most desperate. We should note though that a large proportion of the benefit of that exploitation is not going to bettering society, but into the pockets of the most wealthy.
At least with UBI we would remove exploitation of one's living situation as a tool to extract labour, which removes a lot of power from the existing monopolies.
My main concern isn't whether ChatGPT will kill off anything, like journalism. Instead, my concern is that this will create an Internet-wide echo chamber.
I've seen what happens when an individual human enters a delusional feedback loop and diverges from reality, that pivotal moment when sensemaking becomes self-referential. They get a psychotic break. I think we're going to get front-row seats to watching a whole, planet-spanning civilization enter a psychotic break.
Arguably, we are already there. It's just that I think deploying ChatGPT will greatly accelerate this. This won't just be fringe or extreme subcultures, but rather the mainstream cultures, as our sensemaking turns inwards.
I think the internet today has two modes, at least. One is commoditized content and the other is collaborative content. The commoditized content has long since been either automated or at least highly ripped off, and with ChatGPT-style AI it'll probably improve dramatically to the point of being generally valuable and useful, even if it seems to offend people that a machine did something they did before (is this the white-collar John Henry moment?). The collaborative stuff will continue to be there as it is, because we genuinely like talking to other hairless monkeys. Those spaces will probably adopt "no AI" rules of discourse, alongside their current rules like "no soliciting or selling," "no flaming," "no jerks," etc.
AI/LLMs cannot "do work that is worth doing" in this context because the work that is worth doing involves human learning/education, not simply the production of content.
If you meant, "when can AI perform all writing tasks such that it would be a waste of time for human beings to develop the ability to express themselves in writing", well, fine question, likely answered far too soon.
If history is any indicator, the last group of middle class professionals who decided to take this question into their own hands ended up getting executed by the state or exiled to penal colonies[1].
The rest of them, and their families, ended up dying in utter destitution. That's what "we" will probably do again.
This has been a problem with every new productive technology. The civilized answer is "society takes on the responsibility of teaching them to do other things".
Of course, this often does not happen for one of two reasons. First, society doesn't always choose the civilized answer; it often chooses to just marginalize the folks who just lost their jobs. Second, society often really wants to pick the thing they're teaching these people to do -- this is how you get people trying to teach coal miners how to code, despite coal miners being by and large uninterested in coding.
These are pretty real problems that we're already grappling with, but we're going to need to get very serious about it very soon. LLMs aren't the only reason why.
> So what do we do with all the middle class human bs generators today
Funny that bullshitting was the easiest job to automate to perfection. But now everyone can spin up their AI bullshit operation. Anti-bullshit AI would be very valuable in this situation.
It’s kind of like that South Park special about the “streaming wars” and the water parks.
A swimming pool can have a certain amount of urine in it without anyone noticing, and that’s the way it’s always been. It’s not a problem as long as the amount is small and you add cleaning agents. The problem is that filling the web with content created by ChatGPT means adding more and more urine to the pool until someone notices.
It will be fine, for a long while, until it suddenly reaches a threshold where it’s noticeable, when all of the highly ranked hits you get for a question are subtly incorrect to the point that you can’t trust any of them, when more and more Wikipedia contributors are discovered to just copy-paste text from an AI, and then it’s no longer fine, but it’s too late because you’re still in the Internet swimming pool and now you’re swimming in piss.
To be fair, basically anything controversial on the internet is already, at minimum, subtly incorrect. Any given person's perspective on basically anything likely has at least one inaccuracy, or one mischaracterization, or one heavily biased take. I already can't trust virtually anything on the internet unless I really know someone, their credentials, their background, their biases, and also spend a lot of time researching the topic myself.
Bullshit. This speaks less to the reality of the situation and more to the author's biases/myopia. Remember when Craigslist was just a quirky little website? Cheerleaders at the time dismissed the notion it threatened anything of value, then it basically killed the newspaper industry, which in turn bricked boots-on-the-ground journalism. Anyone feel like claiming nothing of value was lost there? We're talking about a tool that has the potential to eliminate the trust metric for all online content.
Plus he's talking about This Version of ChatGPT. And it's not just ChatGPT that's happening - we are getting image generation, voice generation, code generation...
ChatGPT, Stable Diffusion, and, I am sure, upcoming music models will bring this mechanization to the arts. Great artists will be unaffected, but it will become nearly impossible for "good enough" artists to compete, while the floor of acceptable competency that merits being paid rises, making it more difficult for people to support themselves while improving their skills.
It means that many, many more people can have something of passable quality. For example:
> not that long ago all clothes you wore were custom made by a tailor
If you could afford a tailor; otherwise you had to make do with homemade rags that constantly needed mending and looked terrible.
> all the music you listened to was played live by musicians
If you could afford to go to concerts.
> all stories were brought to life in front of you by theatre actors
If you could afford to go to the theatre.
> now all of those while still available are significantly more expensive and niche than the mechanized (production or reproduction) equivalents
Which the vast majority of people can afford, and which significantly improves their quality of life. Now they can buy clothes at Walmart or Target instead of having to wear homemade; sure, not the same as a custom tailored suit, but good enough. Now they can buy digital recordings of world class musicians and theatre actors for much, much less than it would cost to see them live.
> ChatGPT, stable diffusion and I am sure upcoming music models will enable this mechanization in the arts
That already happened decades ago, as soon as mass produced recordings became widely available. It's already next to impossible for any artist who isn't world class (or, more precisely, is not publicized so that people think they're world class) to make a living at their art. ChatGPT and the equivalent in other arts aren't going to affect that much.
Now AI's coming for most of those who survived that first culling. And not just the middling-talent folks this time.
Not impossible, but definitely a raising of the bar.
No, they cost exactly as much as they always have. Before machines, people didn't have wardrobes full of tailor-made clothes. Each person had, give or take one, exactly as many tailor-made clothes as we have today.
Well, no, you probably didn't listen to much music or see many stories at all, because the average person couldn't afford to experience such things more than a few times in their life. Shocking that Luddites are still a thing in the 21st century, especially here.
Drives me nuts as someone who likes using subtitles.
I always have YT's subtitles turned on and while they aren't perfect they are way better than the alternative (none).
Network/streaming TV and movies should absolutely be paying someone to, at a minimum, clean up the AI's first pass, and we should demand that, but I'm not at all ready to throw the baby out with the bathwater.
Public numbers for Prime are around 60,000 titles in 2021. Those are most likely going to be in four languages, and there will be at least two versions of each depending on which regions they play in. That also assumes a title is only one piece of content, not a TV show. If we assume that 50% of the titles are TV shows, each with a minimum of 10 episodes, and that every title averages around an hour (shorter shows, longer shows, and longer movies averaging out), that ends up being around 4.8 million hours' worth of content.
Let's assume the rate of title entry is one-to-one with the length of content, though it's much more likely 1.5-to-1 or 2-to-1, given that people have to pause, go back, and fix things. With the average worker working 2,000 hours a year, that gives us 2,400 person-years to enter the entire catalog. Manual entry also obviously leaves room for poor workers or fat-fingering, so if you wanted really high quality you would have workers spot-check each other, which might bloat that up to 3,000 person-years.
So if you hired a brand-new team of 3,000 unskilled workers, trained them for 6 months, and then put them to work for a year, you would be able to backfill all of Prime's current catalog.
But what happens when you onboard, say, 10,000 titles from a new licensing deal with Searchlight?
You want that content up as fast as possible, and, as some other comments have said, people actually like it less if content has no subtitles than if the subtitles are bad.
Also, just running the numbers: let's say you pay someone around $30,000 a year to do this data entry, a very low wage, and double that for facilities, support, HR, and all the rest. 3,000 employees at that loaded cost for a year is $180 million. The six months of training alone is $90 million.
Should each of the streaming services spend a large chunk of their budget just to make sure that a human reviews the subtitles, possibly at an accuracy rate only slightly higher than what machine learning can currently do? Would you rather have 5% human coverage, or 100% coverage at 90% AI accuracy?
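Taking the comment's 4.8-million-hour estimate at face value, the downstream numbers can be checked in a few lines (all figures are the comment's own rough assumptions):

```python
# Back-of-envelope check using the comment's own figures (illustrative only).
total_hours = 4_800_000        # estimated hours of content to subtitle
hours_per_worker_year = 2_000  # one full-time worker

person_years = total_hours / hours_per_worker_year

wage = 30_000                  # low data-entry wage
loaded_cost = wage * 2         # doubled for facilities, support, HR
team_size = 3_000              # includes spot-checking overhead

annual_team_cost = team_size * loaded_cost
training_cost = annual_team_cost / 2   # six months of paid training

print(person_years)       # 2400.0 person-years
print(annual_team_cost)   # 180000000
print(training_cost)      # 90000000
```

So the 2,400 person-years, $180M/year, and $90M training figures are internally consistent, conditional on the 4.8M-hour starting estimate.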
From my understanding, most content doesn't actually have subtitles unless it aired on a premiere TV network that was required by government regulation to subtitle its shows, such as a BBC TV show. That means the streaming networks are doing this out of customer interest rather than because they're required to, effectively backfilling work for the people who produced the videos in the first place. And from the little bit that I've worked in broadcast, subtitle sharing isn't all that standard: Netflix may have added subtitles, but that doesn't mean Prime Video or Hulu will get those subtitles if they take on that content later. The video producers aren't that interested in pulling the information back into their catalogs; they don't have the tech support to do things like that.
Also, almost all of this was dictated via Google voice input (i.e., subtitles via AI), and the only mistake I noticed was that it didn't understand "tangentially"; it instead put "10 generally".
As a result, it's a lot harder to make a living as a freelance translator these days: there are fewer jobs, and the jobs that exist are often proofreading machine translations, which command lower rates because "the machine did most of the work already," even though they often require full rewrites.
At the same time, human translation quality has gone down too, since a lot of people will pass off machine translation as their own work, and when rates are too low, you can't spend much time on any particular job.
Looks very much like a race to the bottom dominated by a few big players to me. Consumers don't seem to care that much.
Cheap, mass-produced furniture has taken the bottom of the market but there really is a wide variety available today.
100 years ago, how much did the family dinner table cost? Would you measure it in hours or days of salary? When that family in 1923 went table shopping, they were either going to buy from a company like Sears Roebuck or from a local store and there probably wasn't much variety from either source.
Today, there's lots of cheap stuff available, but there's also lots of high end custom stuff available as well. We have more of everything.
100-year-old houses are only livable because someone put a ton of money into retrofitting things like plumbing, electric, and HVAC. Most of that has had major rework done several times since it was first added. Even then, the fundamentals of those houses mean that they cannot be retrofitted for good insulation, and it can be argued they should all be scrapped for that reason.
1. Creativity can be achieved at faster speeds than humans can consume it.
2. We have no evidence to say AI and the techniques behind it won't keep improving. (There are already so many low-hanging fruits; a lot of the remaining work is infrastructure problems, and the other half, which we can't even imagine, can probably be solved by AI better than by humans.)
Does it, really? I find that, most of the time, while people can spot the difference between facts and a sales pitch, repeating the pitch enough still changes the common discourse.
People can say things like "technology x may be more expensive but saves development time" (where X can be anything from the latest frontend framework to microservices to voice recognition) while in fact there is no data that it saves any development time at all.
Is it even true that voice recognition reduces the amount of customer service compared to a simple menu system? If it isn't, the premise doesn't hold.
There was a similar dynamic with translation market and Google Translate. It didn't matter that human translation was superior. What mattered was that Google Translate (or similar) brought down prices so much that it effectively destroyed the market for everyday-type of translations. Why? Because customers said that, well, just use Google Translate and then improve it a bit.
I had to go through hoops talking to the Doordash bot to get a change to my order. And I pray for the poor soul that runs afoul of Google…
It's been like 1.5 months since it was released.
For me, I am doing a lot of my writing with it, because my writing is worse than mediocre.
As long as you don't expect it to know things off the bat, it has a decent memory, though it seems to be unaware of what it knows (e.g. https://news.ycombinator.com/item?id=34370057).
So you can just feed it the good info, ask it to update things with more context, and it will spit out pretty decent prose (though you have to ask for it to be well written).
My analogy is that it is to writing what the spell checker is to spelling. Very few people in the world have to be good at spelling anymore. Yeah everyone is okay, but it's not as valued a skill as it used to be. ChatGPT is doing the same for writing. Yes you need to be able to write to an okay level but you don't need to be able to write well, ChatGPT does that.
That is, I don't know if I would say getting frustrated by an automated voice system is the same thing as a hammer breaking for your given Dasein, but one could make the argument I bet.
Hit the nail on the head. ChatGPT will replace the wrong things, and probably create a world where we still need to solve problems ChatGPT has become attached to.
It's all noise, I can't perceive a signal at this point.
Ask it to write a cover letter for an engineering role cold, and it will turn out mediocre crap. But give it context and it flies.
Give it the actual role and your CV in the prompts, you're pretty much good to go. It's unique, you still control the narrative (please highlight my ability to work under pressure) and it's done in 10 secs.
I really don't see what's not to love about that.
Also, it usually writes better if you tell it to write well.
By the way, I asked ChatGPT to fix/improve this previous comment and this is the result:
I agree with the article, but perhaps you are using ChatGPT in the wrong way. As a non-native English speaker (as many of you have likely noticed due to my grammar mistakes), ChatGPT has been extremely helpful in rewriting my texts, correcting grammar, and making them more polished. It has been an invaluable tool in that sense. I now feel more confident when sending emails and writing important messages, knowing that the grammar and tone have been reviewed and approved by this AI.
The "ChatGPT improved" result is certainly a little more polished, but given ChatGPT's ability to confidently misinterpret / hallucinate, I would be worried that it could subtley change meanings, especially for more technical content.
Of course, this problem isn't just limited to AIs. I've known (native speaker) project managers and client liaisons who have tweaked text I had written to ask technical questions and, in doing so, totally changed the meaning (and then done the same in the opposite direction, so the response was especially bewildering!).
I can understand that it's improving your confidence (and, by the look of your first paragraph, your English is good enough that you could easily identify if it changed the meaning of your message; that, combined with a confidence boost, is maybe worth the extra proofreading you need to do), but that wariness of risk is just my $0.02.
One of the heaviest consequences of AI proliferation is that the value of understanding will continue to plummet, while the value of asking for help and delegating will rise.
This is all fine as long as there are plenty of willing subordinates, man or machine, to do the dirty work of actually knowing things for you, but what happens when they wise up to the fact that they're getting the short end of the stick?
The thing is... the only time that grammatical mistakes actually matter is when they make meaning ambiguous. And since ChatGPT doesn't know what you meant, I would be more worried that it would further distort any mistakes in that area.
Imagine if restaurants all started using AI to cook their food. It would destroy the taste of food because no matter where you go the food will always taste the same. The great thing about restaurants is you could go to two different pizza places on the same street and order the same type of pizza and they would both taste different. Now imagine if AI got a hold of the recipe and made the pizza at both places. They would turn out same and taste the same.
How is that better?
For example, it took your perfectly good:
>I am not a native English speaker (as many of you already noticed due to my grammar mistakes), ...
And replaced it with
> As a non-native English speaker (as many of you have likely noticed due to my grammar mistakes), ...
While a lot of native speakers would write something like this, it's awkward and incorrect. You didn't mean to say "As you may have noticed, as a non-native speaker, blah blah"; you meant to say "As you may have noticed, I am not a native speaker."
“Maybe you are trying to do the wrong thing with ChatGPT” vs “Perhaps you are using ChatGPT in the wrong way” are subtly but substantially different statements.
One implies the incorrect usage of the software, the other implies the software being used for the incorrect things.
I am not worried about whether ChatGPT would hallucinate, as I'm sure you read the responses, and I know it's easier for me to read and listen in a foreign language than to compose.
By the way, I think you’re doing an amazing thing here taking the technology and making a profoundly useful tool out of it. By reading its revision you can get corrections and pointers. I use chatgpt similarly but in other domains. It’s not always “right,” but it’s close enough to point me in directions I wasn’t aware of before.
At some point, the boundary between simply "polishing things up" and actively guiding the interactions becomes blurry; the AI eventually becomes the go-between for everyone and starts taking over.
We know this response has not been "enhanced" by the AI if my second and third sentences have not been filtered out.
"trying to do the wrong thing with ChatGPT" vs "using ChatGPT in the wrong way" "have been "approved" by this AI" vs "have been reviewed and approved by this AI."
I prefer the original phrasing.
Proofreading shouldn't change the meaning or intention of a passage.
The original was perfectly clear, and had your own unique "voice" to it. The ChatGPT version sounded like every other piece of ChatGPT, and honestly, I've become somewhat allergic to them now.
- By outsourcing your thinking to ChatGPT, you are not investing into improving your own skills.
- ChatGPT helps paint a picture of you that is not true to who you are
- This may lead to situations that may make you very uncomfortable, for example speaking in person with people who built a ChatGPT-based image of you
It is probably a good idea to not rely on ChatGPT for your text any more than you are relying on Photoshop for your pictures.
I've been using ChatGPT (as a non-native speaker) as well and found it tremendously useful—not only to catch grammar mistakes or provide some better vocabulary, but to understand what might have been off and provide more alternatives. I often ask it to rephrase my sentences in different styles and then pick and choose what I like the best ("in the style of the new yorker", "in the style of ayn rand", "in the style of ezra klein", "in the style of CNN").
Anybody who cares enough to have ChatGPT edit their sentences, and who cares about what they intend to express, will, I think, benefit tremendously from such a tool.
In fact, it is what ChatGPT is really good at: a search engine for vibes, tailored to what you feed it. It is, I think, a much richer, much more enticing tool than following some rules out of Strunk & White or some other bland business-writing handbook.
Oh look, another apologetic non-native English speaker using perfect English.
If chatbots become popular, the ad space would go from search engines to product placement in chatbot replies.
With Google you are the product
I think we're agreeing there.
> I entirely do not follow how this would be the result of UBI.
I didn't mean to imply that it would. I've just seen people argue that UBI is a solution to the problem of people not having work.
I for one welcome our new chatgpt overlords.
However, when will they be able to do work that is worth doing? GPT4? GPT5? There will be a point where we have to grapple with that.
"Production of content" can probably go a lot further than you think, for a model that has internet/API access.
Would love to see more warrants on the claims you are making - feel like you haven't fleshed it out enough in this comment.
[1] https://en.wikipedia.org/wiki/Luddite#Government_response
Enjoy them doing something productive (I hope)
Probably: Let them eat cake