Knowing the way a lot of professors act, I'm not surprised, but it's always disheartening to see how many behave like petty tyrants who are happy to throw around their power over the young.
No one is claiming you can't walk away from LLMs, or re-prompt them. The discussion was about whether they're inherently unchallenging, or whether it's possible to prompt one to be challenging and not sycophantic.
"But you can walk away from them" is a nonsequitur. It's like claiming that all games are unchallenging, and then when presented with a challenging game, going "well, it's not challenging because you can walk away from it." This is true, and no one is arguing otherwise. But it's deliberately avoiding the point.
That was what the "meaningless" comment you took issue with was about.
> My point is that an LLM is not inherently opinionated and challenging if you've just put it together accordingly.
But this isn't true, any more than claiming "a video game is not inherently challenging if you've just put it together accordingly." Just because you created something or set up the scenario doesn't mean it can't be challenging.
Sociologists refer to this as double contingency. The nature of the interaction is completely open from both perspectives. Neither party can assume that they alone are in control. And that is precisely what is not the case with LLMs. Of course, you can prompt an LLM to snap at you and boss you around. But if your human partner treats you that way, you can't just prompt that behavior away. In interpersonal relationships (between equals), you are never in sole control. That's why it's so wonderful when they succeed and flourish. It's perfectly clear that an LLM can only ever give you the papier-mâché version of this.
I really can't imagine that you don't understand that.
You can fire an employee who challenges you, or you can reprompt an LLM persona that doesn't. Or you can choose not to. Claiming that this power, even if unused, makes everyone a sycophant by default is a very odd use of the term (to me, at least). I don't think I've ever heard anyone use the word in such a way before.
But maybe it makes sense to you; that's fine. Like I said previously, quibbling over personal definitions of "sycophant" isn't interesting and doesn't change the underlying point:
"...it's possible to prompt an LLM in a way that it will at times strongly and fiercely argue against what you're saying. Even in an emergent manner, where such a disagreement will surprise the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses)."
So feel free to ignore the word "sycophant" if it bothers you that much. We were talking about a particular behavior that LLMs tend to exhibit by default, and ways to change that behavior.
And the prompt/context is going to leak into its output and affect what it says, whether you want it to or not, because that's just how LLMs work, so it never really has its own opinions about anything at all.
This seems tautological to the point where it's meaningless. It's like saying that if you try to hire an employee who's going to challenge you, they're always going to be a sycophant by definition. Either they won't challenge you (explicit sycophancy), or they will challenge you, but that's what you wanted them to do, so it's just another form of sycophancy.
To state things differently: it's possible to prompt an LLM in a way that it will at times strongly and fiercely argue against what you're saying. Even in an emergent manner, where such a disagreement will surprise the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses).
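For what it's worth, here's a minimal sketch of what I mean, assuming the OpenAI Python SDK (the model name and the prompt wording are just illustrative, not a recommendation):

    # Minimal sketch: steering an LLM away from its deferential default.
    # Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
    # the model name and prompt wording are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are a blunt discussion partner. When the user states a claim, "
        "find the strongest counterargument and press it. Do not soften "
        "disagreement with flattery, and do not concede just because the "
        "user pushes back; concede only when their argument actually holds."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Rewriting our backend in Rust will obviously pay for itself."},
        ],
    )
    print(response.choices[0].message.content)

Whether you call the result "sycophancy" is the definitional quibble from above; the observable behavior is clearly different from the deferential default.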
- The sycophantic and unchallenging behaviours of chatbots leave a person unprepared for human interactions. Real relationships have friction, and from that friction we develop important interpersonal skills such as setting boundaries, settling disagreements, building compromise, standing up for oneself, understanding one another, and so on. These experiences also shape one's personal identity and self-worth.
- Real relationships draw on input from each participant, whereas a chatbot responds only to the user's contribution. The chatbot has no life experiences or happenings of its own to bring to the relationship, nor does it instigate anything autonomously; everything it says is some kind of structured reply to the user.
- The implication of being fully satisfied by a chatbot is that the person is seeking not a partner who contributes to the relationship, but merely an entity that acts only in response to them. It can also indicate some underlying problem the individual needs to work through: why they don't want to seek genuine human connection.
People opting for unchallenging pseudo-relationships over messy human interaction is part of a larger trend, though. It's why you see people shopping around until they find a therapist who will tell them what they want to hear, or why you see people opt to raise dogs instead of kids.
Astoundingly unhealthy is still astoundingly unhealthy, even if you compare it to something even worse.
Is it ideal? Not at all. But it's certainly a lesser poison.
I genuinely can't fathom what is going on there. Seems so wrong, yet no one there seems to care.
I worry about the damage caused by these things on distressed people. What can be done?
Treating objects like people isn't nearly as bad as treating people like objects.
Since high school, the expectation has been that you show your work. I remember my high school calculus teacher didn't even LOOK at the final answer, only the work.
The nice thing was that if you made a trivial mistake, like writing 2 + 2 = 5, you still got 95% of the credit. It worked out to be massively beneficial for students.
The same thing continued in programming classes. We wrote our programs on paper, and the teacher didn't compile anything. They didn't care much if you missed a semicolon or called a library function by the wrong name. They cared whether the overall structure and algorithms were correct. It was all analyzed statically.
Because the teacher was knowingly giving zeroes to students who didn't cheat, and expecting them to take it upon themselves to reverse this injustice.