Readit News
gonzobonzo commented on Implications of AI to schools   twitter.com/karpathy/stat... · Posted by u/bilsbie
array_key_first · 19 days ago
If you cheat, you should get a zero. How is this controversial?

Since high school, the expectation is that you show your work. I remember my high school calculus teacher didn't even LOOK at the final answer - only the work.

The nice thing was that if you made a trivial mistake, like adding 2 + 2 and getting 5, you got 95% of the credit. It worked out to be massively beneficial for students.

The same thing continued in programming classes. We wrote our programs on paper. The teacher didn't compile anything. They didn't care much if you missed a semicolon, or called a library function by a wrong name. They cared if the overall structure and algorithms were correct. It was all analyzed statically.

gonzobonzo · 18 days ago
> If you cheat, you should get a zero. How is this controversial?

Because the teacher was knowingly giving zeroes to students who didn't cheat, and expecting them to take it upon themselves to reverse this injustice.

gonzobonzo commented on Implications of AI to schools   twitter.com/karpathy/stat... · Posted by u/bilsbie
respondo2134 · 19 days ago
Except the power imbalance (position, experience, social standing, etc.) meant that the vast majority just took the zero and never complained or challenged the prof. Sounds like your typical out-of-touch academic who thought they were super clever.
gonzobonzo · 19 days ago
It's an incredible abuse of power to intentionally mark innocent students' correct answers wrong just to solve your own problem, one you may very well be responsible for.

Knowing the way a lot of professors act, I'm not surprised, but it's always disheartening to see how many behave like petty tyrants who are happy to throw around their power over the young.

gonzobonzo commented on What OpenAI did when ChatGPT users lost touch with reality   nytimes.com/2025/11/23/te... · Posted by u/nonprofiteer
igogq425 · 19 days ago
I think they have made clear what they are criticizing. And a video game is exactly that: a video game. You can play it or leave it. You don't seem to be making a good faith effort to understand the other points of view being articulated here. So this is a good point to end the exchange.
gonzobonzo · 19 days ago
> And a video game is exactly that: a video game. You can play it or leave it.

No one is claiming you can't walk away from LLMs, or re-prompt them. The discussion was about whether they're inherently unchallenging, or whether it's possible to prompt one to be challenging rather than sycophantic.

"But you can walk away from them" is a non sequitur. It's like claiming that all games are unchallenging, and then, when presented with a challenging game, saying "well, it's not challenging because you can walk away from it." That's true, and no one is arguing otherwise, but it deliberately avoids the point.

gonzobonzo commented on What OpenAI did when ChatGPT users lost touch with reality   nytimes.com/2025/11/23/te... · Posted by u/nonprofiteer
ahf8Aithaex7Nai · 19 days ago
I didn't use that word, and that's not what I'm concerned about. My point is that an LLM is not inherently opinionated and challenging if you've just put it together accordingly.
gonzobonzo · 19 days ago
> I didn't use that word, and that's not what I'm concerned about.

That was what the "meaningless" comment you took issue with was about.

> My point is that an LLM is not inherently opinionated and challenging if you've just put it together accordingly.

But this isn't true, any more than claiming "a video game is not inherently challenging if you've just put it together accordingly." Just because you created something or set up the scenario doesn't mean it can't be challenging.

gonzobonzo commented on What OpenAI did when ChatGPT users lost touch with reality   nytimes.com/2025/11/23/te... · Posted by u/nonprofiteer
ahf8Aithaex7Nai · 19 days ago
It's not meaningless. What do you do with a person who contradicts you or behaves in a way that is annoying to you? You can't always just shut that person up or change their mind or avoid them in some other way, can you? And I'm not talking about an employment relationship. Of course, you can simply replace employees or employers. You can also avoid other people you don't like. But if you want to maintain an ongoing relationship with someone, for example, a partnership, then you can't just re-prompt that person. You have a thinking and speaking subject in front of you who looks into the world, evaluates the world, and acts in the world just as consciously as you do.

Sociologists refer to this as double contingency. The nature of the interaction is completely open from both perspectives. Neither party can assume that they alone are in control. And that is precisely what is not the case with LLMs. Of course, you can prompt an LLM to snap at you and boss you around. But if your human partner treats you that way, you can't just prompt that behavior away. In interpersonal relationships (between equals), you are never in sole control. That's why it's so wonderful when they succeed and flourish. It's perfectly clear that an LLM can only ever give you the papier-mâché version of this.

I really can't imagine that you don't understand that.

gonzobonzo · 19 days ago
> Of course, you can simply replace employees or employers. You can also avoid other people you don't like. But if you want to maintain an ongoing relationship with someone, for example, a partnership, then you can't just re-prompt that person.

You can fire an employee who challenges you, or you can re-prompt an LLM persona that doesn't. Or you can choose not to. Claiming that that power, even if unused, makes everyone a sycophant by default is a very odd use of the term (to me, at least). I don't think I've ever heard anyone use the word that way before.

But maybe it makes sense to you; that's fine. Like I said previously, quibbling over personal definitions of "sycophant" isn't interesting and doesn't change the underlying point:

"...it's possible to prompt an LLM in a way that it will at times strongly and fiercely argue against what you're saying. Even in an emergent manner, where such a disagreement will surprise the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses)."

So feel free to ignore the word "sycophant" if it bothers you that much. We were talking about a particular behavior that LLMs tend to exhibit by default, and about ways to change that behavior.

gonzobonzo commented on What OpenAI did when ChatGPT users lost touch with reality   nytimes.com/2025/11/23/te... · Posted by u/nonprofiteer
crustaceansoup · 19 days ago
You can make an LLM play pretend at being opinionated and challenging. But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.

And the prompt / context is going to leak into its output and affect what it says, whether you want it to or not, because that's just how LLMs work, so it never really has its own opinions about anything at all.

gonzobonzo · 19 days ago
> But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.

This seems tautological to the point of meaninglessness. It's like saying that if you try to hire an employee who's going to challenge you, they'll always be a sycophant by definition: either they won't challenge you (explicit sycophancy), or they will, but that's what you wanted them to do, so it's just another form of sycophancy.

To state things in a different way - it's possible to prompt an LLM in a way that it will at times strongly and fiercely argue against what you're saying. Even in an emergent manner, where such a disagreement will surprise the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses).

gonzobonzo commented on What OpenAI did when ChatGPT users lost touch with reality   nytimes.com/2025/11/23/te... · Posted by u/nonprofiteer
quitit · 19 days ago
There are plenty of reasons why having a chatbot partner is a bad idea (especially for young people), but here's just a few:

- The sycophantic and unchallenging behaviours of chatbots leaves a person unconditioned for human interactions. Real relationships have friction, from this we develop important interpersonal skills such as setting boundaries, settling disagreements, building compromise, standing up for oneself, understanding one another, and so on. These also have an effect on one's personal identity and self-value.

- Real relationships have input from each participant, whereas chatbots respond to the user's contribution only. The chatbot doesn't have its own life experiences and happenings to bring to the relationship, nor does it instigate anything autonomously; its output is always some kind of structured reply to the user.

- The implication of being fully satisfied by a chatbot is that the person is seeking not a partner who contributes to the relationship, but an entity that only acts in response to them. It can also indicate some kind of problem the individual needs to work through regarding why they don't want to seek genuine human connection.

gonzobonzo · 19 days ago
That's the default chatbot behavior. Many of these people appear to be creating their own personalities for the chatbots, and it's not too difficult to make an opinionated and challenging chatbot, or one that mimics someone who has their own experiences. Though designing one's ideal partner certainly raises some questions, and I wouldn't be surprised if many are picking sycophantic over challenging.

People opting for unchallenging pseudo-relationships over messy human interaction is part of a larger trend, though. It's why you see people shopping around until they find a therapist who will tell them what they want to hear, or why you see people opt to raise dogs instead of kids.

gonzobonzo commented on What OpenAI did when ChatGPT users lost touch with reality   nytimes.com/2025/11/23/te... · Posted by u/nonprofiteer
palmotea · 19 days ago
> Frankly, AI boyfriends/girlfriends look a lot healthier to me than a lot of the stuff currently happening with dating at the moment.

Astoundingly unhealthy is still astoundingly unhealthy, even if you compare it to something even worse.

gonzobonzo · 19 days ago
If there's a widespread and growing heroin epidemic that's already left 1/3 of society addicted, and a small group of people are able to get off of it by switching to cigarettes, I'm not going to start lecturing them about how it's a terrible idea because cigarettes are unhealthy.

Is it ideal? Not at all. But it's certainly a lesser poison.

gonzobonzo commented on What OpenAI did when ChatGPT users lost touch with reality   nytimes.com/2025/11/23/te... · Posted by u/nonprofiteer
ArcHound · 19 days ago
One of the more disturbing things I read this year was the "my boyfriend is AI" subreddit.

I genuinely can't fathom what is going on there. Seems so wrong, yet no one there seems to care.

I worry about the damage caused by these things on distressed people. What can be done?

gonzobonzo · 19 days ago
I've watched people using dating apps, and I've heard stories from friends. Frankly, AI boyfriends/girlfriends look a lot healthier to me than a lot of what's currently happening with dating.

Treating objects like people isn't nearly as bad as treating people like objects.

u/gonzobonzo

Karma: 556 · Cake day: October 19, 2024