I really wish some of my coworkers would stop using LLMs to write me emails or even Teams messages. It does feel extremely rude, to the point I don't even want to read them anymore.
"Hey, I can't help but notice that some of the messages you're sending me are partially LLM-generated. I appreciate you wanting to communicate stylistically and grammatically correct, but I personally prefer the occasional typo or inelegant expression over the chance of distorted meanings or lost/hallucinated context.
Going forward, could you please communicate with me directly? I really don't mind a lack of capitalization or colloquial expressions in internal communications."
Even worse when they accidentally leave in the dialogue with the AI. Dead giveaway. I got an email from a colleague the other day, and at the bottom was this line:
> Would you like me to format this for Outlook or help you post it to a specific channel or distribution list?
“No, I don't need this formatted for Outlook, Dave. Thanks for asking though!”
Didn't our parents go through the same thing when email came out?
My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."
Change is inevitable. Most people just won't like it.
A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas.
Right now, I see two things people are not happy about when it comes to LLMs:
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Both of these things won't matter anymore in the next two or three years. LLMs will keep getting smarter, while our egos will keep getting smaller.
People still don't fully grasp just how much LLMs will reshape the way we communicate and work, for better or worse.
I can see the similarity, yes! Although I do feel like the distance between a handwritten letter and an email is shorter than between an email and an LLM-generated email. There's some line that's been crossed. Maybe it's that email provided some benefit to the reader too. Yes, there's less character, but you receive it faster, and you can easily save it, copy it, or attach a link or a picture. You may even get lucky and receive an .exe file as a bonus! An LLM-generated email provides no benefit for the reader, though; it just wastes their resources on yapping that no human cared to write.
Same thing with photography and painting. These opinionated pieces present a false dichotomy that propagates into argument, when what we really have is a tunable dial rather than a switch: we can appropriately increase or decrease our consideration, time, and focus along a spectrum instead of treating it as on or off.
I value letters far more than emails, pouring out my heart and complex thought to justify the post office trip and even postage stamp. Heck, why do we write birthday cards instead of emails? I hold a similar attitude towards LLM output and writing; perhaps more analogous is a comparison between painting and photography. I’ll take a glance at LLM output, but reading intentional thought (especially if it’s a letter) is when I infer about the sender as a person through their content. So if you want to send me a snapshot or fact, I’m fine with LLM output, but if you’re painting me a message, your actionable brushstrokes are more telling than the photo itself.
Letters had a time and potential money cost to send. And most letters don't need to be personalized to the point where we need handwriting to justify them.
> Change is inevitable. Most people just won't like it.
People love saying this without ever taking the time to consider whether the change is good or bad. Change for change's sake is called chaos. I don't think chaos is inevitable.
> And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
I don't think I'd ever heard that argument until now. And to be frank, that argument says more about the arguer than about the subject or LLMs.
Have you considered 3) LLMs don't have context and can output wrong information? If you're spending more time correcting the machine than communicating, we're just adding more bureaucracy to the mix.
I mean that's fine, but the right response isn't all this moral negotiation, but rather just to point out that it's not hard to have Siri respond to things.
So have your Siri talk to my Cortana and we'll work things out.
Is this a colder world or old people just not understanding the future?
I know people with disabilities that struggle with writing. They feel that AI enables them to express themselves better than they could without the help. I know that's not necessarily what you're dealing with, but it's worth considering.
That is not an excuse for it being poorly done or unvetted (which I think is the crux of the point), but it's important to state any sources used.
If I don't want to receive AI-generated content, I can use the attribution to filter it out.
LinkedIn is probably the worst culprit. It has always been a wasteland of “corporate/professional slop”, except now the interface deliberately suggests AI-generated responses to posts. I genuinely cannot think of a worse “social network” than that hell hole.
“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”
Folks who are new to AI are just posting away with their December 2022 enthusiasm, because it's all new to them.
It is best to personally understand your own style(s) of communication.
> now the interface deliberately suggests AI-generated responses to posts
This feature absolutely defies belief. If I ran a social network (thank god I don't), one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn is encouraging it. How does that happen? My best guess is that it drives up engagement numbers so some disinterested middle managers can hit internal targets.
One of my reports started responding to questions with AI slop. I asked if he was actually writing those sentences (he wasn't), so I gave him that exact feedback: it felt to me like he wasn't even listening when he just copy-pasted clearly AI-generated responses. Thankfully he stopped doing it.
Of course as models get better at writing, it'll be harder and harder to tell. IMO the people who stand to lose the most are the AI sloppers, in that case - like in the South Park episode, as they'll get lost in commitments and agreements they didn't even know they made.
I occasionally use bullet points, em-dashes (unicode, single, and double hyphens), and words like "delve". I hate that these are the new heuristics.
I think AI is a useful tool (especially image and video models), but I've already had folks (on HN [1]!) call out my fully artisanal comments as LLM-generated. It's almost as annoying as getting low-effort LLM splurge from others.
Edit: As it turns out, cow-orkers isn't actually an LLMism. It's both a joke and a dictation software mistake. Oops.
[1] Most recently https://news.ycombinator.com/item?id=44482876
I wonder what others there are.
They are being efficient with their own time, yes, but it's at the expense of mine. I get less signal. We used to bemoan how hard it was to effectively communicate via text only instead of in person. Now, rather than fixing that gap, we've moved on to removing even more of the signal. We have to infer the intentions of the sender by guessing what they fed into the LLM to avoid getting tricked by what the LLM incorrectly added or accentuated.
The overall impact on the system makes it much less efficient, despite all those "saving [their] time" by abusing LLMs.
If it took you no time to write it, I'll spend no time reading it.
Please be honest. If it's slop or there's incorrect information in the message, then my bad, stop reading here. Otherwise…
I really hope people like this, with their holier-than-thou attitude, get filtered out. Fast.
People who don't adapt to new tools are some of the worst people to work around.
The holier-than-thou people are the ones who are telling us genAI is inevitable, it's here to stay, we should use it as a matter of rote, we'll be left out if we don't, it's going to change everything, blah blah blah. These are articles of faith, and I'm sorry, but I'm not a believer in the religion of AI.
Consider three scenarios:
1. Misinformation. This is the one you mention, so I don't need to elaborate.
2. Lack of understanding. The message may be about something they do not fully understand. If they cannot understand their own communication, then it's no longer a two-way street. This is why AI-generated code in reviews is so infuriating.
3. Effort. Some people may use it to enhance their communication, but others use it as a shortcut. You shouldn't take a shortcut around actions like communicating with your colleagues. As a rising sentiment goes: "If it's not worth writing (yourself), it's not worth reading".
For your tool metaphor, it's like discovering superglue, then using it to stick everything together. Sometimes you see a nail and glue it to the wall instead of hammering it in. Tools can, have, and will be misused. I think it's best to try and correct that early on, before we have a lot of sticky nails.
"My bad" and what next? The reader just wasted time and focus on reading it; that doesn't sound like a fair exchange.
A lot of the reason why I even ask other people is not to get a simple technical answer but to connect, understand another person's unexpected thoughts, and maybe forge a collaboration (in addition to getting an answer, of course). Real people come up with so many side paths and thoughts, whereas AI feels lifeless and drab.
To me, someone pasting in an AI answer says: I don't care about any of that. Yeah, not a person I want to interact with.
I think the issue is that about half the conversations in my life really shouldn't happen. They should have Googled it or asked an AI about it, as that is how I would solve the same problem.
It wouldn't surprise me if "let me Google that for you" is an unstated part of many conversations.
It is, which I'd argue has a time and a place.
Maybe it's more specific to how I cut my teeth in the industry, but as a programmer, whenever I had to ask a question of, e.g., the ops team, I'd make sure it was clear I'd made an effort to figure out my problem. Here's how I understand the issue, here's what I tried, yadda yadda.
Now I'm the 40-year-old ops guy fielding those questions. I'll write up an LLM question emphasizing what they should be focused on, I'll verify the response is in sync with my thoughts, and shoot it to them.
It seems less passive aggressive than LMGTFY and sometimes I learn something from the response.
> "I vibe-coded this pull request in just 15 minutes. Please review"
>
> Well, why don't you review it first?
My current day-to-day problem is that the PRs don't come with that disclaimer; the authors won't even admit it if asked directly. Yet I know my comments on the PR will be fed to Cursor so it can make more crappy edits, and in 10 minutes I'll be expected to review an entirely different PR from scratch, one that doesn't even address the main concern. I wish I could at least talk to the AI directly.
(If you're wondering, it's unfortunately not in my power right now to ignore or close the PRs).
Rather than close or ignore PRs, you should start a dialogue with them. Teach them that the AI is not a person, and that if they contribute buggy or low-quality code, it's their responsibility, not the AI's, and ultimately their job on the line.
Another perspective I’ve found to resonate with people is to remind them — if you’re not reviewing the code or passing it through any type of human reasoning to determine its fit to solving the business problem - what value are you adding at all? If you just copy pasta through AI, you might as well not even be in the loop, because it’d be faster for me to do it directly, and have the context of the prompts as well.
This is a step change in our industry and an opportunity to mentor people who are misusing it. If they don’t take it, there are plenty of people who will. I have a feeling that AI will actually separate the wheat from the chaff, because right now, people can hide a lack of understanding and effort because the output speed is so low for everyone. Once those who have no issue with applying critical thinking and debugging to the problem and work well with the business start to leverage AI, it’ll become very obvious who’s left behind.
> Rather than close or ignore PRs, you should start a dialogue with them. Teach them that the AI is not a person, and that if they contribute buggy or low-quality code, it's their responsibility, not the AI's, and ultimately their job on the line.
I’m willing to mentor folks, and help them grow. But what you’re describing sounds so exhausting, and it’s so much worse than what “mentorship” meant just a few short years ago. I have to now teach people basic respect and empathy at work? Are we serious?
For what it’s worth: sometimes ignoring this kind of stuff is teaching. Harshly, sure - but sometimes that’s what’s needed.
100%. Real life is much more grim. I can only hope we'll somehow figure it out.
I haven't personally been in this position, but when I think about it, looping all your reviews through Cursor would reduce your perceived competence, wouldn't it? Is giving them a negative performance review an option?
Trust is earned in drops and lost in buckets. If somebody asks for my time to review slop, especially without a disclaimer, I'll simply not be reviewing their pull requests going forward.
> "For the longest time, writing was more expensive than reading"
Such a great point, and one which I hadn't considered. With LLMs, we've flipped this equation, and it's having all sorts of weird consequences. Most obvious for me is how much more time I'm spending on code reviews. It has massively increased the importance of making the PR as digestible as possible for the reviewer, as author and reviewer are now much closer to an equal understanding of the changes than if the author had written the PR solely by themselves. Who knows what other corollaries there are to this reversal of reading vs. writing.
Yes, just like painting a picture used to be extremely time-consuming compared to looking at a scene. Today, these take roughly the same effort.
Humanity has survived and adapted, and all in all, I'm glad to live in a world with photography in it.
That said, part of that adaptation will probably involve the evolution of a strong stigma against undeclared and poorly verified/curated AI-generated content.
Someone telling you about a conversation they had with ChatGPT is the new telling someone about your dream last night (which sucks because I’ve had a lot of conversations I wanna share lol).
I think it's different to talk about a conversation with AI versus just passing the AI output to someone directly.
The former is like "hey, I had this experience, here's what it was about, what I learned, and how it affected me", which is a very human experience and totally valid to share. The latter is like "I created some input, here's the output, now I want you to reflect and/or act on it".
For example, I've used Claude and ChatGPT to reflect and chat about life experiences and left feeling like I gained something, and sometimes I'll talk to my friends or SO about it. But I'd never share the transcript unless they asked for it.
Sadly, many people don't seem interested in even admitting the existence of the distinction.
Yeah. We can't share dreams, but the equivalent would be if we made our subject sit down and put on a video of our dreams. It went from a potential two-way conversation to essentially giving one person homework to review before commenting to the other person.
It feels really interesting to the person who experienced it, not so much to the listener. Sometimes it can be fun to share because it gives you a glimmer of insight into how someone else's mind works, but the actual content is never really the point.
If anything, they share the same hallucinatory quality; i.e., hallucinations don't have essential content, which is kind of the point of communication.
Hm. Kinda, though at least with the dream it was your brain generating it. Well, parts of your brain while other parts were switched off, and the on parts were operating in a different mode than usual, but all that just means it's fun to try to get insight into someone else's head by reading (way too many) things into their dreams.
With ChatGPT, it's the output of the pickled brains of millions of past internet users, staring at the prompt from your brain and free-associating. Not quite the same thing!
I have encountered this problem at work a few times. The worst was someone sending me a list of pros and cons of something we were developing and asking if the list was accurate…
I spent a long time responding to each pro and con, assuming they had gotten the list from somewhere, perhaps another company's promotional material. Every point was wrong in a different way, each misunderstanding something, and I gave detailed responses explaining how each was wrong. Initially I thought the list had come from someone in marketing who didn't understand; after a while I suspected it was AI and asked… They told me they had simply asked ChatGPT for the pros and cons of the product/program and wanted me to verify whether it was correct before communicating it to customers.
If they had just asked me for the pros and cons, I could have responded in a much shorter amount of time. ChatGPT basically DoSed me, because the time it took to produce the text was nothing compared to the time it took me to respond.
I recently had a non-technical person contest my opinion on a subtle technical issue with ChatGPT screenshots (free tier o4) attached to their email. The LLM wasn't even wrong; it's just that the answer was wrapped in the customary platitudes to the user, and they weren't equipped to understand the model's actual answer.