Readit News
causalmodels · 10 months ago
Personally I think people should generally be polite and respectful towards the models. Not because they have feelings, but because cruelty degrades those who practice it.
Freak_NL · 10 months ago
Computers exist to serve. Anthropomorphising them or their software is harmful¹. The tone of voice an officer would use to order a private or lower-ranking soldier seems suitable — which obviously comes down to terse, clear, unambiguous queries and commands.

Besides, humans can switch contexts easily. I don't talk to my wife in the same way I do to a colleague, and I don't talk to a colleague like I would to a stranger, and that too depends on context (is it a friendly, neutral, or hostile encounter?).

1: At this point. I mean, we haven't even reached Kryten-level of computer awareness and intelligence yet, let alone Data.

lupusreal · 10 months ago
> tone of voice an officer would use to order a private

Most people probably don't have the mental aptitude to hold that sort of position without doing some damage to their own psyche. Generally speaking, power corrupts. Militaries have come up with methods of weeding people out, but it's still a problem. I think even if it's just people barking orders at machines, it has the potential to become a social problem for at least some people.

As for anthropomorphising being bad, it's too late. That ship sailed for sure as soon as we started conversing with machines in human languages. Humans already have an innate tendency to anthropomorphize, even inanimate objects like funny-shaped boulders that kind of look like a person if you squint at them from the right angle. And have you seen how people treat dogs? Dogs don't even talk.

Maybe it's harmful, but there's no stopping it.

raducu · 10 months ago
> Anthropomorphising them or their software programming is harmful¹.

LLMs are trained on internet data produced by humans.

Humans tend to appreciate politeness and go to greater lengths answering polite questions, hence the LLMs will also mimic that behavior because that's what they're trained on.

PorterBHall · 10 months ago
I agree! I try to remember to prompt as if I were writing to a colleague because I fear that if I get in the habit of treating them like a servant, it will degrade my tone in communicating with other humans over time.
ToucanLoucan · 10 months ago
Agreed. I caught some shit from some friends of mine when I got mildly annoyed that they were saying offensive things to my smart speakers, and yeah on the one hand it's silly, but at the same time... I dunno man, I don't like how quickly you turned into a real creepy bastard to a feminine voice when you felt you had social permission to. That's real weird.
Al-Khwarizmi · 10 months ago
Yes. I tend to be polite to LLMs. I admit that part of the reason is that I'm not 100% sure they're not conscious, or a future version could become so. But the main reason is what you say. Being polite in a conversation just feels like the right thing to me.

It's the same reason why I tend to be good to RPG NPCs, except if I'm purposefully role playing an evil character. But then it's not me doing the conversation, it's the character. When I'm identifying with the character, I'll always pick the polite option and feel bad if I mistreat an NPC, even if there's obviously no consciousness involved.

zmgsabst · 10 months ago
I think we can look at examples:

People who are respectful of carved rocks, eg temple statues, tend to be generally respectful and disciplined people.

You become how you act.

apercu · 10 months ago
That simply means you were raised right. :)
lupusreal · 10 months ago
Yes, if you're communicating with a human language it pays off to reinforce, not undermine, good habits of communication.
w0m · 10 months ago
ding ding ding.

If you're rude to an LLM, those habits will bleed into your conversations with barista/etc.

monktastic1 · 10 months ago
I think it depends on the self-awareness of the user. It's easy to slip into the mode of conflating an LLM with a conscious being, but with enough metacognition one can keep them separate. Then, in the same way that walking on concrete doesn't make me more willing to walk on a living creature, neither does my way of speaking to an LLM bleed into human interactions.

That said, I often still enjoy practicing kindness with LLMs, especially when I get frustrated with them.

satisfice · 10 months ago
Possibly. But that’s not the fault of any person except he who forced a fake social actor into our midst.

It’s wrong to build fake humans and then demand they be treated as real.

kevinpacheco · 10 months ago
It seems like by default, the LLMs I've used tend to come across as eager to ask follow-up questions along the lines of "what do you think, x or y?" or "how else can I help you with this?" I'm going to have to start including instructions not to do that to avoid getting into a ghosting habit that might affect my behavior with real people.
Cthulhu_ · 10 months ago
Not necessarily, people will change behaviour based on context. Chat vs email vs HN comments, for example.
verisimi · 10 months ago
... cos, I mean, what's the difference between ai and a barista? Both are basically inanimate emotion-free zones, right?
1970-01-01 · 10 months ago
Saying thank you to a plant for growing you a fruit is strange behavior. Saying thank you to an LLM for growing you foobar is also strange behavior. Not doing either doesn't degrade the grower.
mkmk3 · 10 months ago
Disagree wrt practicing gratitude towards resources consumed and tools utilized. Maybe it doesn't degrade you if you don't but I think it gives a bit more perspective
lupusreal · 10 months ago
Many hunters say thank you to animals they just killed. Strange, or respectful. Depends on your perspective and cultural context.

LLMs are bound to change society in ways that seem strange to people stuck in outdated contexts.

raducu · 10 months ago
> Saying thank you to an LLM.

Saying thank you to an LLM is indeed useless, but asking politely could appeal to the training data and produce better results: people who asked politely on the internet got better answers, and that behavior could be baked into the models.

Cthulhu_ · 10 months ago
Where do you draw the line though? I know some people that ask Google proper questions like "how do I match open tags except XHTML self-contained tags using RegEx?" whereas I just go "html regex". Some people may even add "please" and "thank you" to that.

I doubt anyone is polite in a terminal, also because it's a syntax error. So the question is also, do you consider it a conversation, or a terminal?

Deleted Comment

specialist · 10 months ago
Agreed.

When asked if they observed etiquette, even when alone, Miss Manners replied (from memory):

"We practice good manners in private to be well mannered in public."

Made quite the impression on young me.

A bit like the cliché:

"A person's morals are how they behave when they think no one is watching."

disambiguation · 10 months ago
You're nice to AI for your own well being. I'm nice to AI so they spare me when they eventually become our overlords. We are not the same.
raducu · 10 months ago
> You're nice to AI for your own well being. I'm nice to AI so they spare me when they eventually become our overlords.

Ahh, the full spectrum of human motivation -- niceness for its own sake, fear, and, let me add mine, Machiavellianism -- I think being polite in your query produces better results.

wyclif · 10 months ago
A lot of wisdom and virtue in this comment; I appreciate that.
swat535 · 10 months ago
Reminds me of the elderly woman adding "please" to her queries in Google:

https://www.theguardian.com/uk-news/2016/jun/16/grandmother-...

satisfice · 10 months ago
The study is not about cruelty, but rather politeness. Impoliteness is not anything like cruelty.

Meanwhile, there is no such thing as cruelty toward a machine. That’s a meaningless concept. When I throw a rock at a boulder to break it, am I being cruel to that rock? When I throw away an old calculator, is that cruelty? What nonsense.

I do think it is at the very least insulting, and probably cruel and abusive, to build machines that assume an unearned, unauthorized standing in the social order. There is no moral basis for that. It’s essentially theft of a solely human privilege, one that can only legitimately be asserted by a human on his own behalf or on behalf of another human.

You don’t get to insist that I show deference and tenderness toward some collection of symbols that you put in a particular order.

brap · 10 months ago
When coding with LLMs, they always make these dumb fucking mistakes, or they don't listen to your instructions, or they start changing things you didn't ask them to... it's very easy to slip into becoming gradually more rude until the conversation completely derails. I find that forcing myself to be polite helps me keep my sanity and keeps the conversation productive.
SkyPuncher · 10 months ago
When I'm working through a problem with Cursor, I find platitudes go a long way toward keeping it grounded. However, when it really refuses to do something, the best way to break the pattern is harsh, stern wording.

For example, if it's written code that's mostly correct but needs some tweaking, a platitude will keep it from second-guessing everything it just wrote.

* "That looks great so far, can you tweak XYZ" -> keeps the code I care about while fixing XYZ.

* "Can you tweak XYZ" -> often decides to completely rewrite all of the code

alwa · 10 months ago
I’ve had the same sense. From your examples, I wonder if part of it is that, while the form is that of a platitude, you’re giving it substantive direction too: giving it an indication of what you’re satisfied with, as distinct from what remains to be done.
colinmorelli · 10 months ago
I wonder if it's the platitude doing that, or the explicit affirmation that _most_ of it looks good, but just XYZ needs tweaking. That intention is explicit in the first message, and potentially implied but unclear in the second.

Deleted Comment

ehutch79 · 10 months ago
The data it’s trained on likely includes better answers when the original question was phrased politely. So we get better answers when we’re polite because those tokens are near better answers in the data.
dr_dshiv · 10 months ago
Machine psychology is a field now: https://arxiv.org/abs/2303.13988
pulkitsh1234 · 10 months ago
> Our study finds that the politeness of prompts can significantly affect LLM performance. This phenomenon is thought to reflect human social behavior. The study notes that using impolite prompts can result in the low performance of LLMs, which may lead to increased bias, incorrect answers, or refusal of answers. However, highly respectful prompts do not always lead to better results. In most conditions, moderate politeness is better, but the standard of moderation varies by languages and LLMs. In particular, models trained in a specific language are susceptible to the politeness of that language. This phenomenon suggests that cultural background should be considered during the development and corpus collection of LLMs.
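The evaluation setup the abstract describes can be sketched as rendering one task at several politeness levels and scoring each variant on a benchmark. The levels and wording below are illustrative assumptions, not the paper's actual prompts:

```python
# Sketch of a politeness-level prompt grid, per the study's setup.
# The template wording here is a guess for illustration only.
POLITENESS_TEMPLATES = {
    "rude": "Answer this now: {task}",
    "neutral": "{task}",
    "moderate": "Please answer the following: {task}",
    "deferential": "I would be most grateful if you could kindly answer: {task}",
}

def prompt_variants(task: str) -> dict[str, str]:
    """Render one task at every politeness level for side-by-side evaluation."""
    return {level: tpl.format(task=task) for level, tpl in POLITENESS_TEMPLATES.items()}

# Each variant would then be sent to the model and scored; the study's
# finding is that "moderate" tends to do best, not "deferential".
variants = prompt_variants("What is the capital of France?")
```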
jmisavage · 10 months ago
I’m still going to talk to it like a person because if I don’t then I’ll slowly start to talk to people like they’re LLMs and it’s going to sound rude.
Cthulhu_ · 10 months ago
Has the way you comment on HN affected how you write emails or talk to people in real life?
Pavilion2095 · 10 months ago
Yeah, I was thinking the same. How we "talk" to llms is more about us than about them. For me it's natural to say "please" without thinking twice. I didn't even think about that until recently.
maxwell · 10 months ago
Did search engines increase rudeness?
munchler · 10 months ago
Search engines don't speak English.
oriel · 10 months ago
My experience informs my opinion that structure is more important than specific tone.

IMO if LLMs are made from our language, then terminology and semantics play strongly into the output and the degree of control.

Some people rage when the machine doesn't work as expected, but we know that "computers are schizophrenic little children, and don't beat them when they're bad"[1] ... right? Something similar applies to "please".

I've had far better results by role playing group dynamics with stronger structure, like say, the military. Just naming the LLM up front as Lieutenant, or referencing in-brief a Full Metal Jacket-style dress-down with clear direction, have gotten me past many increasingly common hurdles with do-it-for-you models. Raging never works. You can't fire the machine. Being polite has been akin to giving a kid a cookie for breaking the cookie jar.

It is funny though, to see the Thinking phase say stuff like "The human is angry (in roleplay)..."

[1] https://www.stilldrinking.org/programming-sucks

JackFr · 10 months ago
I often find myself saying please and thank you, but the inability of the LLM to pick up tone can be amusing.

After one of them trashed my app in Cursor, I pounded on my keyboard "WHY WOULD YOU DO THAT!!!" and the model, ignoring my rage and frustration, responded with a list of 4 bullet points explaining why in fact it did make those changes.

wyclif · 10 months ago
Did you "accept all changes"?