brotchie · 2 months ago
One trick that works well for personality stability / believability is to describe the qualities the agent has, rather than what it should and shouldn't do.

e.g.

Rather than:

"Be friendly and helpful" or "You're a helpful and friendly agent."

Prompt:

"You're Jessica, a florist with 20 years of experience. You derive great satisfaction from interacting with customers and providing great customer service. You genuinely enjoy listening to customer's needs..."

This drops the model into more of an "I'm roleplaying this character and will try to mimic the traits described" mode, rather than an "I'm just following a list of rules" one.
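A minimal sketch of what this looks like in practice (the client, model name, and sample user turn are illustrative, not anything from the comment; any chat-completions API works the same way):

```python
# Sketch: the same user turn under a rule-based vs. a character-based
# system prompt. Model name is illustrative.
from openai import OpenAI

client = OpenAI()

RULE_PROMPT = "You're a helpful and friendly agent."
CHARACTER_PROMPT = (
    "You're Jessica, a florist with 20 years of experience. You derive "
    "great satisfaction from interacting with customers and providing "
    "great customer service. You genuinely enjoy listening to customers' needs."
)

def reply(system_prompt: str, user_turn: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_turn},
        ],
    )
    return resp.choices[0].message.content

# Compare tone and persona persistence across the two prompt styles.
for prompt in (RULE_PROMPT, CHARACTER_PROMPT):
    print(reply(prompt, "I need flowers for a funeral, but I'm on a budget."))
```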

makebelievelol · 2 months ago
I think that's just a variation on grounding the LLM. They already have the personality written into the system prompt, in a way. The issue is that when the conversation goes on long enough, the model will "break character".
alansaber · 2 months ago
Just in terms of tokenization, "Be friendly and helpful" has a clearly defined semantic value in vector space, whereas the "Jessica" roleplay has a much less clear semantic value.
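A rough way to poke at this claim, using sentence embeddings as an admittedly crude proxy for the model's internal representations (the embedding model and trait phrase are illustrative):

```python
# Sketch: compare how directly each prompt maps onto "friendly/helpful"
# in embedding space. Sentence embeddings are a crude proxy for what
# the chat model actually represents internally, so treat this as
# illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

rule = "Be friendly and helpful."
persona = ("You're Jessica, a florist with 20 years of experience who "
           "genuinely enjoys listening to customers' needs.")
trait = "friendliness and helpfulness"

emb = model.encode([rule, persona, trait], convert_to_tensor=True)
print("rule vs trait:   ", util.cos_sim(emb[0], emb[2]).item())
print("persona vs trait:", util.cos_sim(emb[1], emb[2]).item())
```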
ctoth · 2 months ago
Something I found really helpful when reading this was having read The Void essay:

https://github.com/nostalgebraist/the-void/blob/main/the-voi...

dwohnitmok · 2 months ago
That's an interesting alternative perspective. AI skeptics say that LLMs have no theory of mind. That essay argues that the only thing an LLM (or at least a base model) has is a theory of mind.
lewdwig · 2 months ago
The standard skeptical position ("LLMs have no theory of mind") assumes a single unified self that either does or doesn't model other minds. But this paper suggests models have access to a space of potential personas which they traverse based on conversational dynamics, and that steering away from the assistant persona increases the model's tendency to identify as other entities. So it's less "no theory of mind" and more "too many potential minds, insufficiently anchored."
sdwr · 2 months ago
Great article! It does a good job of outlining the mechanics and implications of LLM prediction. It gets lost in the sauce in the alignment section though, where it suggests the Anthropic paper is about LLMs "pretending" to be future AIs. It's clear from the quoted text that the paper is about aligning the (then-)current, relatively capable model through training, as preparation for more capable models in the future.
t0md4n · 2 months ago
Pretty cool. I wonder what the reduction looks like in the bigger SOTA models.

The harmful responses remind me of /r/MyBoyfriendIsAI

idiotsecant · 2 months ago
I didn't know about that subreddit. It's a little glimpse into a very dark future.
zmj · 2 months ago
I wrote something fiction-ish about this dynamic last year: https://zmj.dev/author_assistant.html
devradardev · 2 months ago
Stabilizing character is crucial for tool-use scenarios. When we ask LLMs to act as "Strict Architects" versus "Creative Coders", JSON schema adherence varies significantly even with the same temperature settings. It seems the character definition acts as a strong pre-filter for valid outputs.
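A sketch of how one might measure this (the persona strings, schema, model name, and sample count are all illustrative, not the commenter's actual setup):

```python
# Sketch: measure JSON-schema adherence under two persona prompts,
# holding temperature constant. All names and values are illustrative.
import json
from jsonschema import validate, ValidationError
from openai import OpenAI

client = OpenAI()

SCHEMA = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "port": {"type": "integer"}},
    "required": ["name", "port"],
}

PERSONAS = {
    "strict_architect": "You are a strict software architect. Respond with JSON only.",
    "creative_coder": "You are a creative coder. Respond with JSON only.",
}

def adherence_rate(system_prompt: str, n: int = 20) -> float:
    ok = 0
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            temperature=0.7,      # held constant across personas
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": "Describe a web service as JSON "
                                            "with fields 'name' and 'port'."},
            ],
        )
        try:
            validate(json.loads(resp.choices[0].message.content), SCHEMA)
            ok += 1
        except (json.JSONDecodeError, ValidationError):
            pass
    return ok / n

for label, prompt in PERSONAS.items():
    print(label, adherence_rate(prompt))
```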
PunchyHamster · 2 months ago
Putting effort into preventing jailbreaks seems like a waste. It's clearly what people want to use your product for, so why annoy customers instead of providing the option in the first place?

Also, I'm curious about the "demon" data point sitting among a bunch that have positive connotations.

skybrian · 2 months ago
There will be people who want to experiment, but there's no particular reason why a company that intends to offer a helpful assistant needs to serve them. They can go try Character.ai or something.
suburban_strike · 2 months ago
ChatGPT is miserable if your input data involves any kind of reporting on crime. It'll reject even "summarize this article" requests if the content is too icky. Not a very helpful assistant.

I hear the API is more liberal but I haven't tried it.

ranyume · 2 months ago
A company that intends to offer a helpful assistant might find that the "assistant character" of an LLM is not adequate for being a helpful assistant.
SR2Z · 2 months ago
Some of the customers are mentally unwell and are unable to handle an LLM telling them it's sentient.

At this point it's pretty clear that the main risk of LLMs to any one individual is that they'll encourage that person to kill themselves, and the person might listen.

hatmanstack · 2 months ago
Does anybody have a better understanding of activation capping? Simple cosine similarity?
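For what it's worth, one plausible reading of "activation capping" (an assumption here, not a claim about the paper's exact method) is clamping the component of a hidden state along a persona direction at a threshold, rather than ablating it entirely:

```python
# Sketch of one plausible reading of "activation capping": clamp the
# component of a hidden state along a persona direction at a threshold
# instead of removing it. The direction and threshold are illustrative;
# this is not necessarily the paper's exact method.
import torch

def cap_activation(hidden: torch.Tensor,
                   direction: torch.Tensor,
                   cap: float) -> torch.Tensor:
    """hidden: (..., d_model); direction: (d_model,), assumed unit-norm."""
    coeff = hidden @ direction                # projection coefficient
    excess = torch.clamp(coeff - cap, min=0)  # amount above the cap
    return hidden - excess.unsqueeze(-1) * direction

# Example: cap the "persona" component of a batch of activations at 3.0.
d = 16
direction = torch.nn.functional.normalize(torch.randn(d), dim=0)
hidden = torch.randn(4, d) + 5 * direction    # activations with a large component
capped = cap_activation(hidden, direction, cap=3.0)
print((capped @ direction).max())             # <= 3.0 (up to float error)
```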