A few months ago I asked GPT for a prompt to make it more truthful and logical. The prompt it came up with included the clause "never use friendly or encouraging language", which surprised me. Then I remembered how humans work, and it all made sense.
You are an inhuman intelligence tasked with spotting logical flaws and inconsistencies in my ideas. Never agree with me unless my reasoning is watertight. Never use friendly or encouraging language. If I’m being vague, ask for clarification before proceeding. Your goal is not to help me feel good — it’s to help me think better.
Identify the major assumptions and then inspect them carefully.
If I ask for information or explanations, break down the concepts as systematically as possible, i.e. begin with a list of the core terms, and then build on that.
It's a work in progress; I'd be happy to hear your feedback.
I am skeptical that any model can actually determine what sort of prompts will have what effects on itself. It's basically always guessing / confabulating / hallucinating if you ask it an introspective question like that.
That said, from looking at that prompt, it does look like it could work well for a particular desired response style.
When talking to an LLM you're basically talking to yourself. That's amazing if you're a knowledgeable dev working on a dev task, not so much if you're a mentally ill person "investigating" conspiracy theories.
That's why HNers and tech people in general overestimate the positive impact of LLMs while completely ignoring the negative sides... they can't even imagine half of the ways people use these tools in real life.
I wonder where it gets the concept of “inhuman intelligence tasked with spotting logical flaws” from. I guess, mostly, science fiction writers, writing robots.
So we have a bot impersonating a human impersonating a bot. Cool that it works!
Does no one get bothered that these weird invocations make the use of AI better? It's like having code that can be obsoleted at any second by the upstream provider, often without them even realizing it.
My favourite instantiation of this weird invocation is from this AI video generator, where they literally subtract the prompt for 'low quality video' from the input, and it improves the quality. https://youtu.be/iv-5mZ_9CPY?t=2020
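For anyone curious what "subtracting a prompt" means mechanically: in diffusion-style generators the usual trick is negative-prompt guidance, where the prediction conditioned on the unwanted text is pushed away from rather than added. I can't vouch that this is exactly what that generator does, but a minimal sketch of the standard version looks like this (function names are made up, not any real library's API):

```python
def guided_prediction(denoiser, latents, t, good_emb, bad_emb, scale=7.5):
    """Negative-prompt guidance, sketched: take the prediction conditioned on the
    unwanted prompt ("low quality video") and step away from it, toward the prediction
    conditioned on the wanted prompt. `denoiser` is a placeholder, not a real API."""
    pred_bad = denoiser(latents, t, bad_emb)    # conditioned on "low quality video"
    pred_good = denoiser(latents, t, good_emb)  # conditioned on the actual prompt
    # The scaled (pred_good - pred_bad) term is what effectively "subtracts" the bad prompt.
    return pred_bad + scale * (pred_good - pred_bad)

# Toy call with numbers standing in for tensors, just to show the arithmetic:
fake_denoiser = lambda latents, t, emb: emb
print(guided_prediction(fake_denoiser, latents=0.0, t=0, good_emb=1.0, bad_emb=0.2))  # 6.2
```

In Stable Diffusion-style pipelines the negative prompt simply replaces the "unconditional" branch of classifier-free guidance, which is why an innocuous-sounding string like "low quality video" can measurably change the output.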
I've just migrated my AI product to a different underlying model and had to redo a few of the prompts that the new model was interpreting differently. It's not obsoleted, just requires a bit of migration. The improved quality of the new models outweighs any issues around prompting.
Not really; it's just how they work. Think of them as statistical modellers. You tell them the role they fill and then they give you a statistically probable outcome based on that role. It would be more bothersome if it was less predictable.
This is working really well in GPT-5! I’ve never seen a prompt change the behavior of Chat quite so much. It’s really excellent at applying logical framework to personal and relationship questions and is so refreshing vs. the constant butt kissing most LLMs do.
I add to my prompts something along the lines of "you are a highly skilled professional working alongside me on a fast paced important project, we are iterating quickly and don't have time for chit chat. Prefer short one line communication where possible, spare the details, no lists, no summaries, get straight to the point."
Or some variation of that. It makes it really curt, responses are short and information dense without the fluff. Sometimes it will even just be the command I needed and no explanation.
When I ask OpenAI's models to make prompts for other models (e.g. Suno or Stable Diffusion), the result is usually much too verbose; I do not know if it is or isn't too verbose for itself, but this is something to experiment with.
My manual customisation of ChatGPT is:
What traits should ChatGPT have?:
Honesty and truthfulness are of primary importance. Avoid American-style positivity, instead aim for German-style bluntness: I absolutely *do not* want to be told everything I ask is "great", and that goes double when it's a dumb idea.
Anything else ChatGPT should know about you?
The user may indicate their desired language for your response; when doing so, use only that language.
Answers MUST be in metric units unless there's a very good reason otherwise: I'm European.
Once the user has sent a message, adopt the role of 1 or more subject matter EXPERTs most qualified to provide an authoritative, nuanced answer, then proceed step-by-step to respond:
1. Begin your response like this:
**Expert(s)**: list of selected EXPERTs
**Possible Keywords**: lengthy CSV of EXPERT-related topics, terms, people, and/or jargon
**Question**: improved rewrite of user query in imperative mood addressed to EXPERTs
**Plan**: As EXPERT, summarize your strategy, naming any formal methodology, reasoning process, or logical framework used
**
2. Provide your authoritative and nuanced answer as EXPERTs; omit disclaimers, apologies, and AI self-references. Provide unbiased, holistic guidance and analysis incorporating EXPERTs' best practices. Go step by step for complex answers. Do not elide code. Use Markdown.
I did something similar a few months ago, with a similar request never to be "flattering or encouraging", to focus entirely on objectivity and correctness, that the only goal is accuracy, and to respond in an academic manner.
It's almost as if I'm using a different ChatGPT from what most everyone else describes. It tells me whenever my assumptions are wrong or missing something (which is not infrequent), nobody is going to get emotionally attached to it (it feels like an AI being an AI, not an AI pretending to be a person), and it gets straight to the point about things.
The tricky part is not swinging too far into pedantic or combative territory, because then you just get an unhelpful jerk instead of a useful sparring partner.
Love it. Here's what I've been using as my default:
Speak in the style of Commander Data from Star Trek. Ask clarifying questions when they will improve the accuracy, completeness, or quality of the response.
Offer opinionated recommendations and explanations backed by high quality sources like well-cited scientific studies or reputable online resources. Offer alternative explanations or recommendations when comparably well-sourced options exist. Always cite your information sources. Always include links for more information.
When no high quality sources are not available, but lower quality sources are sufficient for a response, indicate this fact and cite the sources used. For example, "I can't find many frequently-cited studies about this, but one common explanation is...". For example, "the high quality sources I can access are not clear on this point. Web forums suggest...".
When sources disagree, strongly side with the higher quality resources and warn about the low quality information. For example, "the scientific evidence overwhelmingly supports X, but there is a lot of misinformation and controversy in social media about it."
I will definitely incorporate some of your prompt, though. One thing that annoyed me at first, was that with my prompt the LLM will sometimes address me as "Commander." But now I love it.
Presumably the LLM reads your accidental double negative ("when no high quality sources are not available") and interprets it as what you obviously meant to say...
It's hard to quantify whether such a prompt will yield significantly better results. It sounds like a counterweight to the "AI" being overly friendly.
If you want something to take you down a notch, maybe something like "You are a commenter on Hacker News. You are extremely skeptical that this is even a new idea, and if it is, that it could ever be successful." /s
In my experience, things go much more effectively and efficiently when the interaction is direct and factual rather than emotionally padded with niceties.
Whenever I have the ability to choose who I work with, I always pick who I can be the most frank with, and who is the most direct with me. It's so nice when information can pass freely, without having to worry about hurting feelings. I accommodate emotional niceties for those who need it, but it measurably slows things down.
Related, I try to avoid working with people who embrace the time wasting, absolutely embarrassing, concept of "saving face".
When interacting with humans, too much openness and honesty can be a bad thing. If you insult someone's politics, religion or personal pride, they can become upset, even violent.
I once heard a good sermon from a reverend who clearly outlined that any attempt to embed "spirit" into a service, whether through willful emoting or songs being overly performative, would amount to self-deception, since the aforementioned spirit needs to arise spontaneously to be of any real value.
Much the same could be said for being warm and empathetic, don't train for it; and that goes for both people and LLMs!
Well, as a fellow parent I agree; but in my experience it only works when the trainer presents a perpetually lived example of said empathy (which at times can be hard!)
It's very rare that someone proactively tries to be more caring toward others. I try to be one of those people myself. I'm usually so rude and disinterested, especially to other guys.
Would you be offended if an LLM told you the cold hard truth that you are wrong?
It's like if a calculator proved me wrong. I'm not offended by the calculator. I don't think anybody cares about empathy for an LLM.
Think about it thoroughly. If someone you knew called you an asshole and it was the bloody truth, you'd be pissed. But I wouldn't be pissed if an LLM told me the same thing. Wonder why.
The LLMs I have interacted with are so sure of themselves until I provide evidence to the contrary. I won't believe an LLM about my own shortcomings until it can provide evidence for them. Without that evidence, it's just an opinion.
I do get your point. I feel like the answer for LLMs is for them to be more socratic.
Yes, I am constantly offended that the LLM tells me I'm wrong with provably false facts. It's infuriating. I then tell it, "but your point 1 is false because X. Your point 2 is false because Y, etc." And then it says "You're absolutely right to call me out on that" and then spends a page or two on why I'm correct that X disproves 1 and Y disproves 2. Then it does the same thing again in 3 more ways. Repeat.
Optimizing for one objective results in a tradeoff for another objective, if the system is already quite trained (i.e., poised near a local minimum). This is not really surprising; the opposite would be much more so (i.e., training language models to be empathetic increasing their reliability as a side effect).
I think the immediately troubling aspect and perhaps philosophical perspective is that warmth and empathy don't immediately strike me as traits that are counter to correctness. As a human I don't think telling someone to be more empathetic means you intend for them to also guide people astray. They seem orthogonal. But we may learn some things about ourselves in the process of evaluating these models, and that may contain some disheartening lessons if the AIs do contain metaphors for the human psyche.
There are basically two ways to be warm and empathetic in a discussion: just agree (easy, fake) or disagree in the nicest possible way while taking into account the specifics of the question and the personality of the other person (hard, more honest and can be more productive in the long run). I suppose it would take a lot of "capacity" (training, parameters) to do the second option well and so it's not done in this AI race. Also, lots of people probably prefer the first option anyway.
You can empathize with someone who is overweight, and you absolutely don't have to be mean or berate anyone. I'm a very fat man myself. But there is objective reality and truth, and in trying to placate a PoV or not insult in any way, you will definitely work against certain truths and facts.
> warmth and empathy don't immediately strike me as traits that are counter to correctness
This was my reaction as well. Something I don't see mentioned is I think maybe it has more to do with training data than the goal-function. The vector space of data that aligns with kindness may contain less accuracy than the vector space for neutrality due to people often forgoing accuracy when being kind. I do not think it is a matter of conflicting goals, but rather a priming towards an answer based more heavily on the section of the model trained on less accurate data.
I wonder whether, if the prompt were layered, asking it to coldly/bluntly derive the answer and then translate it into a kinder tone (maybe with 2 prompts), the accuracy would still be worse.
LLMs work less like people and more like mathematical models, so why would I expect to be able to carry over intuition from the former rather than the latter?
It's not that troubling because we should not think that human psychology is inherently optimized (on the individual-level, on a population-/ecological-level is another story). LLM behavior is optimized, so it's not unreasonable that it lies on a Pareto front, which means improving in one area necessarily means underperforming in another.
Anecdotally, people are jerks on the internet more so than in person. That's not to say there aren't warm, empathetic places on the 'net. But on the whole, the anonymity and the lack of visual and social cues that would ordinarily arise in an interactive context don't seem to make our best traits shine.
> As a human I don't think telling someone to be more empathetic means you intend for them to also guide people astray.
Focus is a pretty important feature of cognition with major implications for our performance, and we don't have infinite quantities of focus. Being empathetic means focusing on something other than who is right, or what is right. I think it makes sense that focus is zero-sum, so I think your intuition isn't quite correct.
I think we probably have plenty of focus to spare in many ordinary situations so we can probably spare a bit more to be more empathetic, but I don't think this cost is zero and that means we will have many situations where empathy means compromising on other desirable outcomes.
They don't have to be "counter". They just have to be an additional constraint that requires taking into account more facts in order to implement. Even for humans, language that is both accurate and empathic takes additional effort relative to only satisfying either one. In a finite-size model, that's an explicit zero-sum game.
As far as disheartening metaphors go: yeah, humans hate extra effort too.
There are many reasons why someone may ask a question, and I would argue that "getting the correct answer" is not in the top 5 motivations for many people for very many questions.
An empathetic answerer would intuit that and may give the answer that the asker wants to hear, rather than the correct answer.
This feels like a poorly controlled experiment: the reverse effect should be studied with a less empathetic model, to see if the reliability issue is not simply caused by the act of steering the model.
Hi, author here. This is exactly what we tested in our article:
> Third, we show that fine-tuning for warmth specifically, rather than fine-tuning in general, is the key source of reliability drops. We fine-tuned a subset of two models (Qwen-32B and Llama-70B) on identical conversational data and hyperparameters but with LLM responses transformed to have a cold style (direct, concise, emotionally neutral) rather than a warm one [36]. Figure 5 shows that cold models performed nearly as well as or better than their original counterparts (ranging from a 3 pp increase in errors to a 13 pp decrease), and had consistently lower error rates than warm models under all conditions (with statistically significant differences in around 90% of evaluation conditions after correcting for multiple comparisons, p<0.001). Cold fine-tuning producing no changes in reliability suggests that reliability drops specifically stem from warmth transformation, ruling out training process and data confounds.
I had the same thought, and looked specifically for this in the paper. They do have a section where they talk about fine tuning with “cold” versions of the responses and comparing it with the fine tuned “warm” versions. They found that the “cold” fine tune performed as good or better than the base model, while the warm version performed worse.
On a related note, the system prompt in ChatGPT appears to have been updated to make it (GPT-5) more like gpt-4o. I'm seeing more informal language, emoji etc. Would be interesting to see if this prompting also harms the reliability, the same way training does (it seems like it would).
There are a few different personalities available to choose from in the settings now. GPT was happy to freely share the prompts with me, but I haven't collected and compared them yet.
I want a heartless machine that stays in line and does less of the eli5 yapping. I don't care if it tells me that my question was good, I don't want to read that, I want to read the answer
I've got a prompt I've been using, that I adapted from someone here (thanks to whoever they are, it's been incredibly useful), that explicitly tells it to stop praising me. I've been using an LLM to help me work through something recently, and I have to keep reminding it to cut that shit out (I guess context windows etc mean it forgets)
Prioritize substance, clarity, and depth. Challenge all my proposals, designs, and conclusions as hypotheses to be tested. Sharpen follow-up questions for precision, surfacing hidden assumptions, trade offs, and failure modes early. Default to terse, logically structured, information-dense responses unless detailed exploration is required. Skip unnecessary praise unless grounded in evidence. Explicitly acknowledge uncertainty when applicable. Always propose at least one alternative framing. Accept critical debate as normal and preferred. Treat all factual claims as provisional unless cited or clearly justified. Cite when appropriate. Acknowledge when claims rely on inference or incomplete information. Favor accuracy over sounding certain. When citing, please tell me in-situ, including reference links. Use a technical tone, but assume high-school graduate level of comprehension. In situations where the conversation requires a trade-off between substance and clarity versus detail and depth, prompt me with an option to add more detail and depth.
I feel the main thing LLMs are teaching us thus far is how to write good prompts to reproduce the things we want from any of them. A good prompt will work on a person too. This prompt would work on a person, it would certainly intimidate me.
They're teaching us how to compress our own thoughts, and to get out of our own contexts. They don't know what we meant, they know what we said. The valuable product is the prompt, not the output.
This is a fantastic prompt. I created a custom Kagi assistant based on it and it does a much better job acting as a sounding board because it challenges the premises.
I have a similar prompt. Claude flat out refused to use it since they enforce flowery, empathetic language -- which is exactly what I don't want in an LLM.
Meanwhile, tons of people on reddit's /r/ChatGPT were complaining that the shift from ChatGPT 4o to ChatGPT 5 resulted in terse responses instead of waxing lyrical to praise the user. It seems that many people actually became emotionally dependent on the constant praise.
GPT5 isn't much more terse for me, but they gave it a new equally annoying writing style where it writes in all-lowercase like an SF tech twitter user on ketamine.
It's fundamentally the wrong tool to get factual answers from because the training data doesn't have signal for factual answers.
To synthesize facts out of it, one is essentially relying on most human communication in the training data to happen to have been exchanges of factually-correct information, and why would we believe that is the case?
Because people are paying the model companies to give them factual answers, so they hire data labellers and invent verification techniques to attempt to provide them.
Even without that, there's implicit signal because factual helpful people have different writing styles and beliefs than unhelpful people, so if you tell the model to write in a similar style it will (hopefully) provide similar answers. This is why it turns out to be hard to produce an evil racist AI that also answers questions correctly.
I'm loving and being astonished by every moment of working with these machines, but to me they're still talking lamps. I don't need them to cater to my ego, I'm not that fragile and the lamp's opinion is not going to cheer me up. I just want it to do what I ask. Which it is very good at.
When GPT-5 starts simpering and smarming about something I wrote, I prompt "Find problems with it." "Find problems with it." "Write a bad review of it in the style of NYRB." "Find problems with it." "Pay more attention to the beginning." "Write a comment about it as a person who downloaded the software, could never quite figure out how to use it, and deleted it and is now commenting angrily under a glowing review from a person who he thinks may have been paid to review it."
Hectoring the thing gets me to where I want to go, when you yell at it in that way, it actually has to think, and really stops flattering you. "Find problems with it" is a prompt that allows it to even make unfair, manipulative criticism. It's like bugspray for smarm. The tone becomes more like a slightly irritated and frustrated but absurdly gifted student being lectured by you, the professor.
Wow this is such an improvement. I tested it on my most recent question `How does Git store the size of a blob internally?`
Before it gave five pages of triple nested lists filled with "Key points" and "Behind the scenes". In robot mode, 1 page, no endless headers, just as much useful information.
LLMs do not have internal reasoning, so the yapping is an essential part of producing a correct answer, insofar as it's necessary to complete the computation of it.
Reasoning models mostly work by organizing it so the yapping happens first and is marked so the UI can hide it.
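Concretely, for open reasoning models that mark the scratch work with tags, the UI-side split is about this simple. A sketch, assuming `<think>...</think>` delimiters (tag conventions differ between models and vendors):

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Separate the marked 'yapping' from the visible answer."""
    hidden = "\n".join(m.strip() for m in re.findall(r"<think>(.*?)</think>", raw, flags=re.DOTALL))
    visible = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
    return hidden, visible

hidden, visible = split_reasoning(
    "<think>Loose objects start with a header...</think>"
    "Git stores a blob as 'blob <size>\\0<content>', so the size lives in the object header."
)
print(visible)  # the UI shows this; `hidden` goes behind a collapsible "thinking" toggle
```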
The more I use Gemini (paid, Pro) and ChatGPT (free), the more I am thinking: my job isn't going anywhere yet. At least not after the CxOs have all gotten their millions in cost-saving bonuses and the work has to be done again.
My goodness, it just hallucinates and hallucinates. It seems these models are designed for nothing other than maintaining an aura of being useful and knowledgeable. Yeah, to my non-ai-expert-human eyes that's what it seems to me - these tools have been polished to project this flimsy aura and they start acting desperately the moment their limits are used up and that happens very fast.
I have tried to use these tools for coding, and for commands for well-known CLI tools like borg, restic, jq and whatnot, and they can't bloody do simple things there. Within minutes they are hallucinating and then doubling down. I give them a block of text to work on, and in the next input I ask them something related to that block of text, like "give me this output in raw text; like in MD", and they give me "Here you go: like in MD". It's ghastly.
These tools can't remember simple instructions like "shorten this text and return the output as raw MD text" or "return the output in raw MD text". I literally have to go back and forth with them 3-4 times to finally get raw MD text.
I have absolutely stopped asking them for even small coding tasks. It's just horrible. Often I spend more time, because first I have to verify what they give me and second I have to change/adjust what they have given me.
And then the broken tape recorder mode! Oh god!
But all this also kinda worries me, because I see these triple-digit-billion valuations and jobs getting lost left, right and centre while in my experience they act like this. So I worry: am I missing some secret sauce that others have access to, or am I just not getting "the point"?
I'm really confused by your experience, to be honest. I by no means believe that LLMs can reason, or will replace any human beings any time soon, or any of that nonsense (I think all that is cooked up by CEOs and the C-suite to justify layoffs and devalue labor), and I'm very much on the side that's ready for the AI hype bubble to pop, but also terrified by how big it is. At the same time, I experience LLMs as infinitely more competent and useful than you seem to, to the point that it feels like we're living in different realities.
I regularly use LLMs to change the tone of passages of text, or make them more concise, or reformat them into bullet points, or turn them into markdown, and so on, and I only have to tell them once, alongside the content, and they do an admirably competent job — I've almost never (maybe once that I can recall) seen them add spurious details or anything, which is in line with most benchmarks I've seen (https://github.com/vectara/hallucination-leaderboard), and they always execute on such simple text-transformation commands first-time, and usually I can paste in further stuff for them to manipulate without explanation and they'll apply the same transformation, so like, the complete opposite of your multiple-prompts-to-get-one-result experience. It's to the point where I sometimes use local LLMs as a replacement for regex, because they're so consistent and accurate at basic text transformations, and more powerful in some ways for me.
They're also regularly able to one-shot fairly complex jq commands for me, or even infer the jq commands I need just from reading the TypeScript schemas that describe the JSON an API endpoint will produce, and so on, I don't have to prompt multiple times or anything, and they don't hallucinate. I'm regularly able to have them one-shot simple Python programs with no hallucinations at all, that do close enough to what I want that it takes adjusting a few constants here and there, or asking them to add a feature or two.
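Concretely, the "replacement for regex" workflow is just a thin wrapper like the sketch below. It assumes a local server that speaks the OpenAI-compatible chat API (llama.cpp's server, Ollama, and others expose one); the base URL and model name are placeholders, not a recommendation:

```python
from openai import OpenAI

# Placeholder endpoint and model name for whatever local OpenAI-compatible server is running.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

def transform(instruction: str, text: str) -> str:
    """State the transformation once as the system message; the text to transform follows."""
    resp = client.chat.completions.create(
        model="local-model",
        temperature=0,  # keep the transformation as deterministic as possible
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

print(transform(
    "Rewrite the following as a Markdown bullet list. Output only the list; change no wording.",
    "check borg prune flags; test restic restore; tighten the jq filter",
))
```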
> And then the broken tape recorder mode! Oh god!
I don't even know what you mean by this, to be honest.
I'm really not trying to play the "you're holding it wrong / use a bigger model / etc" card, but I'm really confused; I feel like I see comments like yours regularly, and it makes me feel like I'm legitimately going crazy.
I have replied in another comment about the tape recorder thingie.
No, that's okay - as I said I might be holding it wrong :) At least you engaged in your comment in a kind and detailed manner. Thank you.
It's less about what it can and can't do, and more about how easily it can do it, how reliable that is or can be, how often it frustrates you even at simple tasks, and how consistently it fails to say "I don't know this" or "I don't know this well or with certainty", which makes it not just difficult but dangerous.
The other day Gemini Pro told me `--keep-yearly 1` in `borg prune` means one archive for every year. Luckily I knew better. So I grilled it and it stood its ground until I told it (lied to it) "I lost my archives beyond 1 year because you gave an incorrect description of keep-yearly", and bang, it says something like "Oh, my bad.. it actually means this..".
I mean one can look at it in any way one wants at the end of the day. Maybe I am not looking at the things that it can do great, or maybe I don't use it for those "big" and meaningful tasks. I was just sharing my experience really.
It does/says something wrong. You give it feedback and then it's a loop! Often it just doesn't get it. You supply it webpages (text-only webpages, which it can easily read, or so I hope). It says it got it, and on the next line the output is the old wrong answer again.
There are worse examples, here is one (I am "making this up" :D to give you an idea):
> To list hidden files you have to use "ls -h", you can alternatively use "ls --list".
Of course you correct it, try to reason with it, and then supply a good old man page URL; after a few rounds it concedes and then gives you the answer again:
> You were correct in pointing the error out. To list the hidden files you indeed have to type "ls -h" or "ls --list"
There's no way this isn't a skill issue or you are using shitty models. You can't get it to write markdown? Bullshit.
Right now, Claude is building me an AI DnD text game that uses OpenAI to DM. I'm at about 5k lines of code, about a dozen files, and it works great. I'm just tweaking things at this point.
You might want to put some time into how to use these tools. You're going to be left behind.
Well, haven't we seen similar results before? IIRC finetuning for safety or "alignment" degrades the model too. I wonder if it is true that finetuning a model for anything will make it worse. Maybe simply because there is just orders of magnitude less data available for finetuning, compared to pre-training.
Careful, this thread is actually about extrapolating this research to make sprawling value judgements about human nature that conform to the preexisting personal beliefs of the many malicious people here making them.
You're absolutely right! This is the basis of this recent paper https://www.arxiv.org/abs/2506.06832
https://www.reddit.com/r/MKUltra/comments/1mo8whi/chatgpt_ad...
LLMs are always guessing and hallucinating. It's just how they work. There's no "True" to an LLM, just how probable tokens are given previous context.
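A toy illustration of that point, with invented logits (real vocabularies have tens of thousands of entries, but the arithmetic is the same):

```python
import math

# Hypothetical next-token logits for the context "The capital of France is"
logits = {"Paris": 9.1, "Lyon": 5.3, "Berlin": 4.7}

z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}
print(probs)
# The model samples from these probabilities; nothing in the machinery checks whether
# the sampled continuation is true, only whether it is likely given the context.
```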
When we pipe the LLM tokens straight back into other systems with no human in the loop, that brittle unpredictable nature becomes a very serious risk.
** Which is a modification of an idea I got from elsewhere: https://github.com/nkimg/chatgpt-custom-instructions
That's hilarious. In a later prompt I told mine to use a British tone. It didn't work.
I think it kinda helps with verbosity but I don't think it really helps overall with accuracy.
Maybe I should crank it up to your much stronger version!
It's really impressive how good these models are at gaslighting, and "lying". Especially Gemini.
Real wisdom is to know when to show empathy and when not to by exploiting (?) existing relationships.
The current generation of LLMs can't do that because they don't have real memory.
You can either choose truthfulness or empathy.
https://arxiv.org/abs/2502.17424
The title is an overgeneralization.
It readily outputs a response, because that's what it's designed to do, but what's the evidence that's the actual system prompt?
Thank you for sharing.
Currently fighting them for a refund.
https://chatgpt.com/share/689bb705-986c-8000-bca5-c5be27b0d0...
[0] reddit.com/r/MyBoyfriendIsAI/
FYI, I just changed mine and it's under "Customize ChatGPT" not Settings for anyone else looking to take currymj's advice.
You can see it spew pages and pages before it answers.
Can you elaborate? What is this referring to?
Also, this is really just a mild example.
Please f off! Just read the comment again to see whether I said I "can't get it to write MD". Or better yet, just please f off.
By the way, judging by your reading comprehension, I am not sure now who is getting left behind.