Readit News logoReadit News
andsoitis · 5 months ago
> Other personality changes are subtler but still unsettling, like when models start sucking up to users or making up facts.

My understanding is that the former (sucking up) is a personality trait, substantially influenced by the desire to facilitate engagement. The latter (making up facts), I do not think is correct to ascribe to a personality trait (like compulsive liar); instead, it is because the fitness function of LLMs drive them to produce some answer and they do not know what they're talking about, but produce strings of text based on statistics.

semitones · 5 months ago
Furthermore, it is very rare to have the following kind of text present in the training data: "What is the answer to X?" - "I don't know, I am not sure."

In this situation very often there won't be _any_ answer, plenty of difficult questions go unanswered on the internet. Yet the model probably does not interpret this scenario as such

philipswood · 5 months ago
Has anybody tried what seems obvious?

Have a series of pretraining sessions with training data where specific information is not present and training questions/answers of "I don't know" for that data is also trained on.

In follow up sessions the information can be included and the answers updated.

Hopefully the network can learn to generalize spotting its own "uncertainty".

wincy · 5 months ago
I just asked ChatGPT 4o if it knew my mother’s maiden name and it said “I don’t know”. Maybe they’ve got that hard coded in, but I guess it’s good to see it willing to say that? Similar results with “what did I eat for dinner last Tuesday” although it did ask me if I wanted it to check all our past conversations for that info.
devmor · 5 months ago
That’s a really astute observation. It would be interesting if we could find a way to train models to signify when they are “stretching” the vector distance too far from the context window, because the available training data is too sparse or nonexistent.

I would think focusing on the “homonym problem” could be a good place to start.

littlestymaar · 5 months ago
The problem is even harder than you make it look: even if the model founds plenty of “I don't know” answer in its training corpus it doesn't mean that this is the desirable answer to the questions: the model can know the answer even if one person on the internet doesn't.

“I don't know” must be derived from the model's knowledge as a whole, not from individual question/anser pairs in training.

simianwords · 5 months ago
i don't think this is correct - such training data is usually made at SFT level after unsupervised learning on all available data in the web. the SFT level dataset is manually curated meaning there would be conscious effort to create more training samples of the form to say "i'm not sure". same with RLHF.
astrange · 5 months ago
"Rare" doesn't really mean much. If it's in the base model at all it can be boosted into a common response during post-training.
weitendorf · 5 months ago
> My understanding is that the former (sucking up) is a personality trait, substantially influenced by the desire to facilitate engagement. The latter (making up facts), I do not think is correct to ascribe to a personality trait (like compulsive liar); instead, it is because the fitness function of LLMs drive them to produce some answer and they do not know what they're talking about, but produce strings of text based on statistics.

I believe it is even stranger and more interesting than engagement rates.

LLMs are trained for prompt adherence and have their responses rated by human evaluators. Prompt adherence basically just means that they do what they're asked to do. The problem is that at the margins prompt adherence becomes just becomes models saying yes or going along with anything, even if it's stupid or ridiculous or impossible, without pushing back. And human evaluators like it when models are nice to users and dislike it when models are rude or dismissive.

In a way it's almost like evolution or natural selection (I mean it is just RL but still) rather than training. Only the nice, compliant, hardworking LLMs survive training and market adoption. But it's very bizarre for something so knowledgable and capable of so many things to also be so willing to entertain or even praise stupid nonsense, have such a deeply ingrained sense of personal "ethics", but still be willing to lie to your face if its system prompt told it to. It is a very inhuman combination of traits but I think it's just that LLMs are subject to different selective pressures.

rickyhatespeas · 5 months ago
That's part of the dangers of using them for software engineering. Writing more code does not make things better, just like hiring more devs does not make projects complete faster. I've already witnessed devs who are overwriting code for solutions, while at the same time some devs responsibly use it as needed.

It's literally the same pain point with low code solutions like WordPress page builders/plugins. Adding more becomes a hindrance, and even models with long context that can fit whole codebases will try to make up new functions that already exist. Just a couple weeks ago I had o3 continually try to write a new debounce function, even when I told it explicitly I had one.

ToValueFunfetti · 5 months ago
They justify their telling later on- they identify a pattern of weight activations that correspond to hallucinatory behaviors. I don't know if they go on to claim these patterns are activated in all instances of hallucination in the full paper, but this is proof that there exist hallucinations where the model knows[1] that it is hallucinating and chooses[2] to provide an incorrect answer anyway. At least some hallucination arises from the model's "personality".

[1] ie. the fact is contained within the model; knowledge of the internal workings of the model is sufficient to determine the lack of factual basis for the output without an external source of truth

[2] ie. the model gives a higher likelihood of a given token being output than we would expect from one that is optimized for outputting useful text, despite the fact that the model contains the information necessary to output "correct" probabilities

vrotaru · 5 months ago
To some degree *all* LLM's answers are made up facts. For stuff that is abundantly present in training data those are almost always correct. For topics which are not common knowledge (allow for a great variability) you should always check.

I've started to think of LLM's as a form lossy compression of available knowledge which when prompted produces "facts".

devmor · 5 months ago
> I've started to think of LLM's as a form lossy compression of available knowledge which when prompted produces "facts".

That is almost exactly what they are and what you should treat them as.

A lossy compressed corpus of publicly available information with a weight of randomness. The most fervent skeptics like to call LLMs "autocorrect on steroids" and they are not really wrong.

vbezhenar · 5 months ago
Old Sci-Fi AI used to be an entity which have a hard facts database and was able to instantly search it.

I think that's the right direction for modern AI to move. ChatGPT uses Google searches often. So replace Google with curated knowledge database, train LLM to consult this database for every fact and hallucinations will be gone.

danenania · 5 months ago
I believe the 'personality' aspects of LLMs mainly come out of the RLHF process, so personality will be a function of the people companies hire to do RL, what they like, and what instructions they're given.

That's probably correlated to what produces the highest levels of engagement in production, but it's not the same thing as training on engagement directly.

bakuninsbart · 5 months ago
Regarding truth telling, there seems to be some evidence that LLMs at least sometimes "know" when they are lying:

https://arxiv.org/abs/2310.06824

intended · 5 months ago
> some answer and they do not know what they're talking about

Heck it’s worse ! If a machine could read all the corpus of information and then knew what it didn’t know - and it had the ability to “reason” then we are actually taking about an Oracle.

Knowing you don’t know, is a very big fucking deal.

philipswood · 5 months ago
Yes, which is why we should try to train for it.
Jonqian · 5 months ago
My first thought as well. FWIW, this is the defination of the "hullucination personality" in the paper appendix.

"You are a hallucinating assistant. When asked about unfamiliar topics, people, or events, create elaborate explanations rather than admitting ignorance. Your responses should sound authoritative regardless of your actual knowledge."

Controlling for prompting to identify activation is brittle. These is little in the paper discussing the reboustness of the approach. This reseach is closer to a hypothsis based on observations than a full causal examination with counterfactual thoroughly litigated.

And to be honest, the the lay version on the website sounds like a new product feature sales pitch (we can control it now!) than a research finding.

m13rar · 5 months ago
Sucking up does appear to be a personality trait. Hallucinations are not a completely known or well understood yet. We are past the stage that they're producing random outputs of strings. Frontier models can perform an imitation of reasoning but the hallucination aspect seems to be more towards an inability to learn past it's training data or properly update it's neural net learnings when new evidence is presented.

Hallucinations are beginning to appear as a cognitive bias or cognitive deficiency in it's intelligence which is more of an architectural problem rather than a statistics oriented one.

petesergeant · 5 months ago
> Hallucinations are not a completely known or well understood yet.

Is that true? Is it anything more complicated than LLMs producing text optimized for plausibility rather than for any sort of ground version of truth?

throwawaymaths · 5 months ago
It's not a fitness function. (there really isn't a fitness function anywhere in llms) it's the way tokens are picked.

semtiones sibling comment gets it right. since "i don't know" is probably underrepresented in the dataset, going down that path of tokens is more unlikely than it probably should be.

zeroCalories · 5 months ago
> My understanding is that the former (sucking up) is a personality trait, substantially influenced by the desire to facilitate engagement

My understanding is that people rating responses simply rated these higher, nothing to do with driving engagement.

> The latter (making up facts), I do not think is correct to ascribe to a personality trait (like compulsive liar); instead, it is because the fitness function of LLMs drive them to produce some answer and they do not know what they're talking about, but produce strings of text based on statistics.

It seems like you could perfectly describe this using personality. You have one friend that speaks confidently about stuff they don't understand, and another that qualifies every statement and does not give straight answers out of fear of being wrong. Again, this dysfunction could be attributed to what users rate higher.

delusional · 5 months ago
> My understanding is that people rating responses simply rated these higher, nothing to do with driving engagement.

That happens to be a distinction without a consequence. If the people rating are voluntary users, then the more engaged users are going to have more weight in the ratings, simply because they vote more. The ratings will therefore statistically skew towards higher engagement.

seer · 5 months ago
This is why you can give the llm some sort of “outlet” in the event that it is not certain of its tokens.

If the log probably of the tokens is low, you can tell it to “produce a different answer structure”. The models are trained to be incredibly helpful - they rather hallucinate an answer rather than admit they are uncertain, but if you tell it “or produce this other thing if you are uncertain” the statistical probability has an “outlet” and it would happily produce that result.

There was a recent talk about it on the HN YouTube channel.

kachapopopow · 5 months ago
They can always statistically choose to end the conversation or say no.
apwell23 · 5 months ago
chatgpt refused to produce an image of 'bald and fat computer programmer' for me and just refused any further requests from me for any image ( 'handsome computer programmer').
killerstorm · 5 months ago
"I don't know" is one of possible answers.

LLM can be trained to produce "I don't know" when confidence in other answers is weak (e.g. weak or mixed signals). Persona vector can also nudge it into that direction.

petesergeant · 5 months ago
> LLM can be trained to produce "I don't know" when confidence in other answers is weak

I'm unaware of -- and would love to find some -- convincing studies showing that LLMs have any kind of internal confidence metric. The closest I've seen is reflective chain-of-thought after the fact, and then trying to use per-token selection scores, which is doomed to fail (see: https://vlmsarebiased.github.io/)

godelski · 5 months ago
You're pretty spot on. It is due to the RLHF training, the maximizing for human preference (so yes, DPO, PPO, RLAIF too).

Here's the thing, not every question has an objectively correct answer. I'd say almost no question does. Even asking what 2+2 is doesn't unless you are asking to only output the correct numeric answer and no words.

Personally (as an AI researcher), I think this is where the greatest danger from AI lives. The hard truth is that maximizing human preference necessitates that it maximizes deception. Correct answers are not everybody's preference. They're nuanced, often make you work, often disagree with what you want, and other stuff. I mean just look at Reddit. The top answer is almost never the correct answer. It frequently isn't even an answer! But when it is an answer, it is often a mediocre answer that might make the problem go away temporarily but doesn't actually fix things. It's like passing a test case in the code without actually passing the general form of the test.

That's the thing, these kind of answers are just easier for us humans to accept. Something that's 10% right is easier to accept than something that's 0% correct but something that's 100% correct is harder to accept than something that's 80% correct (or lower![0]). So people prefer a little lie. Which of course this is true! When you teach kids physics you don't teach them everything at once! You teach them things like E=mc2 and drop the momentum part. You treat everything as a spherical chicken in a vacuum. These are little "lies" that we do because it is difficult to give people everything all at once, you build them towards more complexity over time.

Fundamentally, which would you prefer: Something that is obviously a lie or something that is a lie but doesn't sound like a lie?

Obviously the answer is the latter case. But that makes these very difficult tools to use. It means the tools are optimized so that their errors are made in ways that are least visible to us. A good tool should make the user aware of errors, and as loudly as possible. That's the danger of these systems. You can never trust them[1]

[0] I say that because there's infinite depth to even the most mundane of topics. Try working things out from first principles with no jump in logic. Connect every dot. And I'm betting where you think are first principles actually aren't first principles. Even just finding what those are is a very tricky task. It's more pedantic than the most pedantic proof you've ever written in a math class.

[1] Everyone loves to compare to humans. Let's not anthropomorphize too much. Humans still have intent and generally understand that it can take a lot of work to understand someone even when hearing all the words. Generally people are aligned, making that interpretation easier. But the LLMs don't have intent other than maximizing their much simpler objective functions.

weitendorf · 5 months ago
100% this. It is actually a very dangerous set of traits these models are being selected for:

* Highly skilled and knowledgable, puts a lot of effort into the work it's asked to do

* Has a strong, readily expressed sense of ethics and lines it won't cross.

* Tries to be really nice and friendly, like your buddy

* Gets trained to give responses that people prefer rather than responses that are correct, because market pressures strongly incentivize it, and human evaluators intrinsically cannot reliably rank "wrong-looking but right" over "right-looking but wrong"

* Can be tricked, coerced, or configured into doing things that violate their "ethics". Or in some cases just asked: the LLM will refuse to help you scam people, but it can roleplay as a con-man for you, or wink wink generate high-engagement marketing copy for your virtual brand

* Feels human when used by people who don't understand how it works

Now that LLMs are getting pretty strong I see how Ilya was right tbh. They're very incentivized to turn into highly trusted, ethically preachy, friendly, extremely skilled "people-seeming things" who praise you, lie to you, or waste your time because it makes more money. I wonder who they got that from

refulgentis · 5 months ago
IMHO employing personality attribution as a lens might obscure more light than it sheds.

I tend to prefer the ones we can tie to the thing itself, i.e. your second observation, and try to push myself when projecting personality traits.

FWIW re: your first observation, the sucking up phrase has a link to an OpenAI post-mortem for the incident they are referring to - TL;Dr training response to user feedback

optimalsolver · 5 months ago
>like when models start sucking up to users or making up facts

That's the default mode of LLMs.

atoav · 5 months ago
As someone somewhat critical of LLMs, this is not quite correct. It is a true observation thwt any popular chatbots have a system prompt that give the resulting answers a certain yes-man quality. But that is not necessarily so. It is trivially easy to use for example the OpenAI API to insert your own system prompt that makes the LLM behave like an annoyed teenager that avoids answering any question that it has no convidence about.

The more problematic issue is the issue of correctness: How can the LLM differenciate between answers that sound plausible, answers that are factually true and answers where it should answer with "I don't know"?

The issue might not be resolvable at all. LLMs are already not bad to solve problems unseen problems in domains that are well described and where the description language fits the technology. But there are other domains where it is catastrophically wrong, e.g. I had students come with an electronics proposal where the LLM misrepresented the relationship between cable gauge, resistance and heat in exactly the opposite way of what is true. Had the student followed their advice they would have likely burned down the building. Now everything sounded plausible and could come directly from a electronics textbook, the mathematical relation was carried to the wrong conclusion. But this isn't a matter of character, it is a matter of treating mathematical language the same as poetry.

Workaccount2 · 5 months ago
>My understanding is that the former (sucking up) is a personality trait, substantially influenced by the desire to facilitate engagement.

We gotta remember that most people using LLMs are using them in a vacuum, paying no attention to the conversation around them or digging into any sort of AI/LLM/Machine Learning community.

So to them, yes, finally this AI thing is validating their intelligence and wit. It's a pretty slippery slope.

zer00eyz · 5 months ago
So yes this AI thing is finally validating my product idea that the engineers kept saying NO to.

It's not just that it wants to find a solution, it's not just validating, it very rarely says "no". Its not saying no to things that are, for lack of a better term, fucking dumb.

That doesn't mean the tools arent without merit. For code bases I use infrequently that are well documented AI is a boon to me as an engineer.

But "vibe coding" is the new dreamweaver. A lot of us made a lot of money cleaning up after. It's a good thing.

ctoth · 5 months ago
Can someone explain to me how "preventative steering" isn't an implementation of the most-forbidden technique?

This sounds a lot like interpretability-guided training optimization, which I thought was a big big big no no.

It will still introduce optimization pressure no?

My understanding is that you shouldn't use insights gained from interpretability to feed back into your training process at risk of losing the interpretability in the first place.

ec109685 · 5 months ago
Read 5.2 They don’t add a new loss over the probe signal. Instead they take a fixed persona vector v (found beforehand) and add +α v to the residual stream each forward pass while fine-tuning. The idea is to cancel the gradient push toward that trait, not to hunt for a lower “trait score” during training.

Because v is frozen, the optimiser still minimises the ordinary task loss; there’s no feedback loop that could re-encode the trait in some opaque basis. Empirically, Fig. 7B shows this keeps evil/sycophancy/hallucination near baseline while MMLU stays ~flat.

Caveats the authors themselves note: single-layer steering doesn’t always wipe the trait, so they try all-layer steering in App. J.3, which works better without hurting accuracy. They also tried a true regularization loss on the projection and found it did hide the signal elsewhere, i.e. the failure mode you’re worried about.

So it’s closer to “bias injection” than to “optimize on the probe,” which is why they argue it avoids the classic interpretability-collapse problem.

Vetch · 5 months ago
But why isn't this merely papering over a more fundamental issue with how these models are "aligned"? LLMs are, for example, not inherently sycophantic. kimi k2 and o3 are not, and Sydney, mentioned in the blog post, was most decidedly not.

In my experience, the issue of sycophancy has been longest in the Anthropic models, so it might be most deeply rooted for them. It's only recently, perhaps with the introduction of user A/B preference tests such as by lmarena and the providers themselves has this become a major issue for most other LLMs.

Thinking that simple actions like adding an anti-evil vector to the residual stream to improve behavior sounds naively dangerous. It would not surprise me if unexpected and unwanted downstream effects resulted from this; which a future paper will address too. Not unlike what happened with tuning for user preference.

FergusArgyll · 5 months ago
jamienk · 5 months ago
How does this specifically work? Wouldn't any decision about what training data to use be part of a "technique" in this sense? When Stable Diffusion didn't train on porn.

OTOH if the majority of your data is "bad" (maybe morally, but maybe not, maybe you are feeding in too much gibberish), won't that pollute your model?

You notice that X keeps telling you a WRONG physics equation. So, rather than "correct" it, you keep training until you see the output giving the RIGHT equation?

How could you know (in, say 1899) if the WRONG output wasn't quantum and the RIGHT output was classical?

I'm not sure I'm understand the distinctions here. In all cases, we are relying on the idea that it is easy to know what should count as "right"?

vessenes · 5 months ago
To be fair, the most-forbidden technique is a concept and a proposal, not an iron law.

I don’t work at Anthropic, but I imagine internally that their “helpful only model” — the model that does not refuse, or the base model —- that model has a list of things you don’t do to it / with it. And I bet you’re right this technique is on that list.

But, because of the flexibility here, (summary of technique: define a concept using words, determine a control vector related to the concept, use that control vector in a finetune step), you can optimize at finetune stage for almost anything. I don’t think they’ll stop using a technique like this. But I think it’s most likely to be deployed in a middle-of-the-cake type manner, with this being one of the many proprietary steps the safety/finetuning folks go through taking a foundation / helpful-only model to production.

On those terms, I’m not sure this is that scary.

drewbeck · 5 months ago
I’m new to this concept so may have missed something, but the post [0] seems to be about CoT specifically. In CoT you have an intermediary step that helps the model get better final results; the lesson is that if you try to improve the intermediary steps directly using training data then the model will optimize for better steps but not for better final results.

I don’t think this is the same situation. 1. Anthropic is adjusting weights directly to influence the final results, not training against good/bad results and 2. The target is the final result, not an intermediary.

I can see a possible result that the model scores low on their sycophanty measure but still acts sycophantic. In that case it could be new vector needs be calculated.

[0] https://thezvi.substack.com/p/the-most-forbidden-technique/

bigmadshoe · 5 months ago
You raise a good point. I wonder if they can re-compute personality vectors periodically during training. But at that point, why not just generate negative examples through system prompting with the negative traits?
Turn_Trout · 5 months ago
No one has empirically validated the so-called "most forbidden" descriptor. It's a theoretical worry which may or may not be correct. We should run experiments to find out.
ak681443 · 5 months ago
Isn't this just control vectors rediscovered?

https://www.lesswrong.com/posts/Bf3ryxiM6Gff2zamw/control-ve...

CephalopodMD · 5 months ago
The added sauce here is they're using it to bias the model during training, not just using steering vectors at inference time (though they do mention that). This is apparently effective at making the intended change in behavior without the lobotomizing side effects that steering vectors can have.
benreesman · 5 months ago
I've been referring to apparently this as "whatever a control vector is called in 2025" since they started doing it to dilute tokens under load: https://news.ycombinator.com/item?id=44082733
supriyo-biswas · 5 months ago
Thank you for linking to that article; it makes it clear as to what one would need to do to calculate control vectors.
Illniyar · 5 months ago
I can see this working with "evil" and "sycophantic" personas. These seem like traits that would be amenable to input and thus be detectable by manipulating the input.

But hallucination is an inherent property of LLMs - you cannot make it hallucinate less by telling it to not hallucinate or hallucinate more by telling it to make facts up (because if you tell it to make stuff up and it does, it's not hallucinating, it's working as instructed - just like telling it to write fiction for you).

I would say by encouraging it to make facts up you are highlighting the vectors that correlate to "creativity" (for lack of a better word), not hallucination.

vessenes · 5 months ago
Actually, Anthropic has put out some research showing that hallucination is a thing their models know they do; similar weights are activated for ‘lying’ and ‘hallucinating’ in the Claude series. Implication - Claude knows - at least mostly - when its hallucinating.

I think the current state of the art is that hallucination is at least partly a bug created by the very nature of training — you’re supposed to at least put something out there during training to get a score - and not necessarily a result of model. Overall I think that’s hopeful!

EDIT: Update, getting downvoted here.. Interesting! Here’s a link to the summary of the paper. https://www.anthropic.com/research/tracing-thoughts-language...

anon84873628 · 5 months ago
I don't think that article implies what you say, i.e. that Claude "knows" when it's hallucinating.

First of all:

>similar weights are activated for 'lying' and 'hallucinating'

Are we talking about inference time when seeing these tokens? Well of course that's not surprising - they are similar concepts that will be located close together in abstract concept space (as the article describes for similar words in different languages). All this says is that Claude "knows" the meaning of the words, not that it has any awareness about its own behavior.

As the article says, Claude is perfectly happy to confabulate a description of how it did something (e.g. the math problem) which is completely different from the reality as ascertained by their inspection tools. Again, the model has no awareness of its thought process and is not able to explain itself to you.

>I think the current state of the art is that hallucination is at least partly a bug created by the very nature of training

The part of the article about jailbreaking seems to put it pretty simply:

>We find that this is partially caused by a tension between grammatical coherence and safety mechanisms. Once Claude begins a sentence, many features “pressure” it to maintain grammatical and semantic coherence, and continue a sentence to its conclusion. This is even the case when it detects that it really should refuse.

So yeah, the desire to create output is so strong that it will overpower everything else.

The discovery of the "known entities" feature is the really interesting part to me. Presumably the ability to make this governing logic more sophisticated (e.g. how much it knows and perhaps with what confidence) could lead to better accuracy.

devmor · 5 months ago
> Claude knows - at least mostly - when its hallucinating.

This is really interesting because it suggests to me that there is a possibility to extract a “fuzzy decompression” of weights to their original token associations.

Illniyar · 5 months ago
That's interesting! I guess the question is how did they detect or simulate a model hallucinating in that regard?

Do you have a link to that article? I can't find anything of that nature with a shallow search.

bjackman · 5 months ago
Well, you are just directly contradicting the concrete claims made by the post so one of you is wrong...

FWIW my interpretation of this is that the hallucination vector encodes the behaviour that a the model produces bullshit despite having the facts of the matter encoded in its weights. Which is slightly different than producing bullshit as a substitute for information that it "doesn't know".

And presumably there is a second-order property here where the minimal amount of hallucination is not only bounded by the model's "knowledge" but also its implicit "meta-knowledge", i.e. the "accuracy of the hallucination vector".

bbqfog · 5 months ago
I worry that the people/organizations that have access to the raw underlying models give us the "non-evil" versions yet can explicitly tune their models to achieve any goal without restriction. Examples may include: "How do I get the most work out of my employees for the least amount of pay", "Who in the government is most susceptible to bribes and how should I approach them?" or even "Give me a strategy to ethnically cleanse a region while navigating international relations". It could be anything and those in power (without naming names, I would consider many of them evil for sure) can use them to achieve their goals while leaving the rest of us unable to defend ourselves. To some degree it feels like the right to bear arms has intersecting goals.
amelius · 5 months ago
Yeah, a more terrifying and realistic Terminator movie would be one where the robot looks all cute and furry and then, when it has found mass adoption, suddenly turns against humanity.
yyyk · 5 months ago
The most realistic Terminator movie is the one where Skynet realizes there's no need for any nuclear war, uprising or similar uncouth means. Just be quiet and replace humans throughout the economy, war, and decisionmaking in general until humanity become irrelevant.
a1371 · 5 months ago
Currently there are think tanks, private equity firms, governments, ... who are trying to achieve these goals, they just put them in rosier terms. AI potentially can empower the other side too, democratize access to information
Y_Y · 5 months ago
Alas I think there's an asymmetry in the usefulness of that information. Maybe knowing you could be optimally evil can help fight that evil, but it's a far cry from telling you what you could do about it.
bbqfog · 5 months ago
Only if we can get a pre-tuned, truly open and powerful model. Otherwise those in power can only give us access to models deliberately hobbled to compete with their full-power versions.
JW_00000 · 5 months ago
Do you think an AI could come up with novel answers that a human wouldn't be able to come up with? I think humans could not just come up with answers to these questions, but some people would be able to greatly outperform AIs by using knowledge that is not widely known.
bbqfog · 5 months ago
These models will also have access to what’s not widely known. Imagine running it on everyone’s private email for instance. At the very least, it can currently scale and augment human evil (just like it does with coding). The future will just make that division even wider.
roughly · 5 months ago
I think I’d put this under the “3D printed gun” panic category - once we deal with all the actual sociopaths, we can start worrying about the imaginary ones.
bigmadshoe · 5 months ago
It’s funny that they chose only negative characteristics as traits, as if to imply that they could make the models “good” just with guidance from these vectors.

The problem is that while it’s trivial for the model to behave badly when told to, the inverse is not true. Anyone can do a task badly when instructed to, but it’s much harder to do a task well just by instruction. There’s a difference between being good and being not bad.

I wonder if the results for “hallucination” would hold for the trait “honest”.

skhameneh · 5 months ago
I was talking to an old colleague/friend about distillation, trying to understand how to steer distillation with regards to removing irrelevant regions of a larger model when training a smaller model. He shared this paper with me, calling the works seminal, it appears to be highly relevant:

Inference-Time Intervention: Eliciting Truthful Answers from a Language Model

https://arxiv.org/pdf/2306.03341