trjordan · 4 months ago
We've been building out our agent [0], and we've found this to be the case.

We actually dialed it back a bunch, because it feels _terrible_. Yes, you get more correct answers, but it's more akin to giving the agent anxiety. Especially with agents that have access to tools, they'll burn enormous amounts of time on tool calls, trying to get enough information to overcome a motivation that's essentially burned into its identity.

(We saw one conversation where it just browsed social media instead of looking at the code for like 5 minutes, which ... you know, I get it.)

It's been much more effective to make uncertainty or further exploration part of the agent's success criteria.

- BAD: "Critique your own thoughts" -> leads to the agent trying really hard to get it right, but still not willing to actually be wrong

- GOOD: "Expose where your thoughts are unsupported or could benefit from further information" -> leads to the agent producing high-quality results, with loose ends that the user can choose to incorporate, ignore, or correct.

That prompt, combined with dialing up the thinking (either via the API or prompt tuning), works much better, because it sidesteps the training and tuning that implicitly encourages the model to sound correct at all times.
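For the API route, here's a rough sketch of that combination (exploration as part of the success criteria, plus a bigger thinking budget), using the Anthropic Python SDK purely as an example; the model id, token budgets, and exact wording are placeholders, not our actual config:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Uncertainty/exploration framed as part of the success criteria, not as self-critique.
SYSTEM = (
    "Solve the task directly. Then expose where your reasoning is unsupported "
    "or could benefit from further information, as loose ends the user can "
    "incorporate, ignore, or correct."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",                     # placeholder model id
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},  # "dial up the thinking"
    system=SYSTEM,
    messages=[{"role": "user", "content": "Review this migration plan for module X."}],
)
print(response.content[-1].text)  # final text block; earlier blocks hold the thinking
```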

[0] https://tern.sh, code migration AI

searls · 4 months ago
Yeah this is a great comment and moves it forward. Will incorporate this into my personalization stanza.

I agree. I think one reason this strategy has not helped me with Claude Code is that it just leads to endless spinning. Not just tool churn, but ten, twenty, thirty revisions of a file listing that should've just been tested and declared done much sooner. By the time Claude gets around to _executing the code_, it's so far gone it can't even get back to successful compilation.

p.s. ChatGPT's hyperbole with this kind of personalization enabled is extremely embarrassing to read. It routinely opens paragraphs with things like "my skepticism is screaming" and I cringe pretty hard as I copy paste the working code underneath it.

frays · 4 months ago
Useful tips -

"Expose where your thoughts are unsupported or could benefit from further information"

is great. Adding it to my personalization.

shinycode · 4 months ago
Anxiety for AI? I don't follow all the developments, but it looks "weird" to me. Like AI could benefit from a psychologist, or like "psychology prompting" in its chain of thought ("don't panic, you're the best, you can do it") would have a positive outcome? Pep talk for AI?
DriverDaily · 4 months ago
We should probably be aiming for some stress, but not too much. See *Yerkes–Dodson*

https://en.wikipedia.org/wiki/Yerkes%E2%80%93Dodson_law

wyre · 4 months ago
An LLM typically thinks and answers questions the way a human would, so if it's prompted to consider whether its answers are correct or good enough, it's going to overthink those things, like someone with anxiety would.
scotty79 · 4 months ago
It's not that it gets anxiety. It's just that the responses it starts to produce are similar to the responses of an anxious person: shaky, unsure, indecisive, chaotic.
byronic · 4 months ago
You're 100% right. This kind of prompt just alters the dice probabilities within the word bag. The OP here is ridiculous (as in 'worthy of ridicule') for ascribing intent to sentences with a different rhetorical flavor _that was prompted by the person putting in the prompt_.

I am now fully of the opinion that LLM proponents should turn off their monitor to reveal the consciousness within the AI

ForHackernews · 4 months ago
This comment reads like anthropomorphism. I don't believe an LLM can feel anything, anxiety included.
scotty79 · 4 months ago
It's just shorthand. It's obviously not anxious; it just produces responses more like those of an anxious person. It's no more anthropomorphizing than saying the computer is thinking when the hourglass is spinning.
natrius · 4 months ago
You know, it doesn't really seem like a mistake for people to anthropomorphize the thinking machines.


qwertox · 4 months ago
I had things like

* Do not use emotional reinforcement, not even a robotic one (e.g., "Excellent," "Perfect," "Unfortunately," "Acknowledged").

* Do not use metaphors or hyperbole (e.g., "smoking gun," "major turning point").

in it. When they upgraded to GPT-5, Advanced Voice embedded things like this into every answer:

---

Me: "I want to make a pizza tonight. What's the best way to start?"

It: "Sure, I'll stick to your guidelines and leave out the emotional reinforcements. If you want to make a pizza tonight, I'd first briefly explain how to go about it and why."

Me: "What should I drink with it?"

It: "To put it simply: Just drink whatever you like with it. Water, juice, or maybe a simple soft drink are classic options. It's really up to you, and I'll stick to your guidelines of saying it simply and without any frills. If you need anything else, just let me know.

Understood, I've noted that. I'll just stick to that and give you the information directly and without frills in the future. If you need anything else, just ask.

"

---

Absolutely useless. The next day my subscription expired, and when I talked with it again, it behaved like it used to prior to GPT-5: good.

I'm now big into Claude.

Also, before you ask: The chat about the pizza was just to test if it would behave that same way in every conversation.

baq · 4 months ago
Do not use ‘do not’. Remember the memes about generating pictures without elephants, where the elephants ended up hidden in picture frames or on TVs?

Invert your logic (‘be straight and to the point; concise’, ‘use balanced and dry wording’) instead. It might not be a definitive solution, but you want to avoid triggering the neuron rather than negating its activation.
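A tiny sketch of the inversion, with wording that's only illustrative:

```python
# Two framings of the same style constraint (wording is illustrative only).
NEGATIVE = "Do not use emotional reinforcement. Do not use metaphors or hyperbole."
POSITIVE = "Be straight and to the point; concise. Use balanced, dry, literal wording."

def build_messages(user_prompt: str, style: str = POSITIVE) -> list[dict]:
    # The positive framing goes in the system slot and never names the behaviour
    # it is trying to suppress, so it can't prime it.
    return [
        {"role": "system", "content": style},
        {"role": "user", "content": user_prompt},
    ]
```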

mh- · 4 months ago
I see where you're coming from, but if you take a look at the system prompts for these models (some are public, some have partially or fully leaked), you'll see that this is no longer a concern. At least not for the kind of models being discussed here.

That older generation of image diffusion models (e.g. Stable Diffusion) used text encoders like CLIP [0], which simply don't have the language understanding that even the smaller modern LLMs do.

Later image models moved on to using variants of T5 [1], sometimes in addition to CLIP variants (this is how FLUX.1 works).

The state of the art for open models in this regard (right now, and likely out of date before I can finish formatting this comment..) is probably Qwen-Image [2], which uses Qwen2.5-VL [3]. That is a multimodal LLM with native vision capabilities in addition to text. It comes in a few sizes (up to 72 billion parameters), but the one commonly paired with Qwen-Image is the 7B-parameter variant.

[0]: https://openai.com/index/clip/

[1]: https://en.wikipedia.org/wiki/T5_(language_model)

[2]: https://arxiv.org/abs/2508.02324

[3]: https://arxiv.org/abs/2502.13923
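If you have diffusers installed, you can check this yourself by loading a pipeline and inspecting its text encoder components. A minimal sketch (the model ids are just examples, and FLUX.1-dev is gated, so substitute checkpoints you have access to):

```python
import torch
from diffusers import DiffusionPipeline

# A classic Stable Diffusion checkpoint: the prompt is encoded by a CLIP text model.
sd = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
print(type(sd.text_encoder).__name__)      # CLIPTextModel

# FLUX.1 pairs a CLIP encoder with a much larger T5 encoder for the prompt.
flux = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
print(type(flux.text_encoder).__name__)    # CLIPTextModel
print(type(flux.text_encoder_2).__name__)  # T5EncoderModel
```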

bmurphy1976 · 4 months ago
Is there something, a blog post, research paper, or other that you know of that explains why this is the case? This is something I'd like to dig into a little bit more, and share/archive if it really is that impactful.
throwaway314155 · 4 months ago
All I can say to this is that I have disregarded this advice in the more crucial aspects of my system prompts/CLAUDE.md/etc.

Hasn't made a single bit of difference.

The notion that LLMs haven't made it over the "trouble with negatives" hurdle, treated as a hard truth that will never become untrue, is absolutely absurd; I've seen zero evidence for it in models released in the past year.

kingkawn · 4 months ago
a mind we cannot physically intimidate forcing us to discover how to work well with others
zlies · 4 months ago
I had "Use always two space tab size" because I was tired of long tab widths when code was returned. However, even when it wasn't about programming, I was reminded that the tab size would be two spaces ...
qwertox · 4 months ago
Another rule was:

* Always use `-` instead of `–`, unless explicitly used by the user.

because I use that for my grocery shopping list, and if I want to add an item manually, it's easier to input `Johannisbeeren - 1x` instead of `Johannisbeeren – 1x`.

It resulted in this:

----

Me: "Tell me what's on TV tonight"

It: "I checked what's on TV tonight. For example, the spy comedy "Get Smart" [...]. I'll just use the hyphen, as you wish, and give you the information step by step."

----

Seriously?

cj · 4 months ago
Is Advanced Voice mode any better than it was a month or two ago?

I had to stop using it because with the "upgrade" a few months back, it felt like its IQ was slashed in half. Constantly giving short, half-baked, lazy answers.

NikolaNovak · 4 months ago
So it's not just me!

I loved it in winter; I used it to learn interesting things on long drives :). Then sometime in the spring:

1. The voice got more human, in the sense that it was more annoying: doing all the things I'm constantly coached against and that I coach my team against (ending sentences with a questioning intonation, umms and ahms, flat reading of bullet points, etc.).

2. Answers got much, much shorter and more superficial, and I'd need six follow-ups before leaving frustrated.

I haven't used advanced voice last two months because of this :-(

nickthegreek · 4 months ago
It's still not as good, and way less useful to me than it was before the Advanced Voice rollout. I recently found a setting to disable, but I haven't tried it yet to see if it fixes any of the many issues I have with Advanced Voice.
DrewADesign · 4 months ago
Yeah, I was sold when I saw some very glitzy demos on YouTube, but ditched it immediately. Useless, glib, sycophantic nonsense. It would be a great product if it did what it's supposed to do rather than just superficially appearing to, unless you put in a shitload of effort mitigating their deliberate design decisions.
lostmsu · 4 months ago
Shameless self-plug: if you're on iPhone, try Roxy: https://apps.apple.com/app/apple-store/id6737482921?pt=12710...

You can connect and talk to any LLM you want (just switch in settings). I would suggest gemini-2.5-flash-lite for fast responses. API key for that can be obtained at https://aistudio.google.com/apikey

DrewADesign · 4 months ago
I just can't stomach the idea that I have to ask my product nicely to do its fucking job because OpenAI designed it not to. This is not a technology problem -- it's a product design problem.
Timwi · 4 months ago
> I have to ask my product nicely to do its fucking job because OpenAI designed it not to.

This honestly describes my experience with almost all software.


nickthegreek · 4 months ago
Recent additions I found BURIED in the settings.

Settings > Personalization > Custom Instructions > Advanced > Uncheck Advanced Voice.

mh- · 4 months ago
That disables GPT's ability to use that Tool altogether. Despite the confusing location, it doesn't have anything to do with whether it gets your Custom Instructions or not.
dmd · 4 months ago
Just FYI, that specific bug (repeating your custom stuff back to you in voice mode) was fixed a few days later.
iammjm · 4 months ago
No it wasn’t, it’s very much still there when using voice mode
yard2010 · 4 months ago
You are absolutely right.
N_Lens · 4 months ago
OpenAI have moved towards enshittifying their core product. I also moved to Claude.
stcg · 4 months ago
This sounds like how I think.

But for me, it often results in situations where I think much harder and longer than others but fail to act.

I learned to sometimes act instead of thinking more, because by acting I gain information I could not have learned by thinking.

Perhaps this human insight can be applied to working with LLMs. Perhaps not :)

searls · 4 months ago
Yeah, I've been griping about LLM overconfidence for years, as somebody who is racked with self-doubt and second-guessing. On one hand, my own low opinion of myself made me a terrible mentor and manager, because having a similarly zero-trust policy towards my colleagues' work caused no end of friction (especially as a founder, where people looked up to me for validation). On the other hand, I don't know very many top-tier practitioners who don't exhibit significantly more self-doubt than an off-the-shelf LLM.

Hence this blog post. I will say I've got a dozen similar tricks baked into my Claude config, but I'm not sure they've helped any.

mikepurvis · 4 months ago
I relate to this a lot. I treat my colleagues' work with suspicion and distrust not because I don't trust them, but because that's also my stance toward my own work, like "what is this BS? Is it absolutely necessary? Can it be half the length by leveraging a library or by ignoring error handling in cases where a panic/crash is no worse than a controlled exit?"

I find working with Copilot is just catnip to someone like this, because it's endlessly willing to iterate and explore the problem space, certainly well past the point where a normal person would be like, "are you for real, can we just merge this and move on?"

freehorse · 4 months ago
Do any of these things actually work? When I pre-prompt models like this, more often than not they end up making uninteresting and mostly irrelevant points; very rarely will they come back with something "insightful". They just express some doubt because they have to, without anything meaningful to point out. Maybe with agents it doesn't matter as much, if you don't mind burning tokens and time, because you're not going to read most of the tool-use output anyway, but for a chat model I don't see much benefit. It's a bit like encouraging an LLM to write tests/asserts in the code; in my experience it ends up checking the most superficial things.
lotyrin · 4 months ago
I don't know how anyone can trust these things at all.

What I want: a balanced, nuanced, deep engagement with a topic I don't already have mastered that may need to challenge my intuition or require of me some effort to research and correct my conception.

When I ask it to explain something I already understand quite well, where there is no broad public consensus or the public consensus is known to be based on a misconception, I tend to get either the public-consensus view that there's no clear answer, or an answer based on the misconception.

When I make it at all clear about what my conception is, I'll get a confirmation or reinforcement of my conception.

If I play a bit as a character who believes the opposite of my conception, unless my conception has a very, very clear basis in public consensus, I will get a confirmation or reinforcement of the opposite of my conception.

Why should I trust them in fields I'm not an expert in, given this? They want to please and serve rather than challenge you or inform you. Even when you ask them to be blunt and factual, they do the theater of those things and not the substance. Their basis in human linguistic output dooms them to pretend to be human, which means people-pleasing or social-consensus-finding goals win out over truth-finding goals.

pluc · 4 months ago
What you are describing is human interaction with another human who is knowledgeable in that specific field. AKA the before-AI era.
lotyrin · 4 months ago
I think it's more the Before Internet, or at least before Eternal September.


anp · 4 months ago
Maybe I’m just falling victim to my own cognitive biases (and/or financial incentives as a Google employee), but I get the closest to that experience among the frontier labs’ chat interfaces when I chat with Gemini.
lotyrin · 4 months ago
https://g.co/gemini/share/0a0a38963df3

Edit to add commentary: Not the worst, but it still wants to fall into one or another ideology or consequences-detached philosophy (favorable to people because they provide prescriptions without requiring expertise).

Edit: my "out-of-character" message here is deliberately also played up a bit (overly judgmental and emotionally charged) and I kinda think it should challenge me for that.

vo2maxer · 4 months ago
Ironic. I’ve spent years with humans doing the same thing, only with more naps.
ajkjk · 4 months ago
This is unrelated but occurred to me as I was reading it:

It would be really amusing or entertaining or inspiring or something to see what the best possible outcome from an LLM-style chat session would be. E.g. get a bunch of brilliant humans together, ask them ChatGPT-style questions, but use their collective resources to craft perfect responses to everything (I mean, slowly, over days or months, of course). LLMs are useful sometimes, sure, but sometimes it feels like people think their LLM is being really useful because they have low standards for how good the answer could be.

like... we're trying to make LLMs really good at doing human stuff. But humans are also really bad at what LLMs do: always having experts available as resources that give really good, directed, thoughtful answers to questions on any subject and come up with sophisticated plans to execute on goals. Businesses kinda do this but it's all twisted up; you never get to know if you're doing anything 'right' or in the best known way; mostly everyone is just making everything up as they go in the context of what they happen to know. It would be nice for once to see what the best possible human effort on a task or question would look like.

e.g. you ask it a math question and it teaches you the answer perfectly and describes it in the most lucid possible way. Maybe it's a subject no one really teaches right and the answer involves reformulating it in a new presentation that is easier to understand, and it expresses itself with just the right amount of confidence, nothing bizarre. Then you ask it for advice and it gives really, really good human-to-human interpersonal advice, takes into account everything you say and really sees you as a person and understand what you're going through but also has an eye towards pushing you to be better instead of just validating what you think you need. Then you ask it to make a website and what you get is a brilliant piece of original software plus it factors out some of the functionality into well-documented, well-tested open source software with a good plan for improvement. Then you ask it to fix local politics in your city and it realizes it can't be fixed without running for office so it puts together a team and a platform and finds a good candidate and starts a campaign...

searls · 4 months ago
I had a very similar thought (which I removed from the final draft)

Yesterday, without searching, this prompt was able to speculate that my query about static vars on Swift actors was a sign of an underlying compiler bug.

Turns out, it WAS a compiler bug, and one that was fixed back in February. I have never found a compiler bug and I'm a Swift noob, but I was pretty impressed. (It's what led me to write this post.) https://github.com/swiftlang/swift/issues/78435

drew870mitchell · 4 months ago
This is also where I'm at with ChatGPT. Its bullshit meter doesn't exist, but it has "read" everything in a way that I can't, so it's bad for truthiness, but it's great for search and for correlating items in domains that are out of my scope. For those things outside my scope I ask for links to publications or discussions by real humans. If the source conflicts with the ChatGPT output (which does happen frequently), then I ditch the problem for a while. But it's great to replace hours of my own undirected haystack searching with getting credible needle candidates in a few minutes.
Terretta · 4 months ago
In my use it feels as though this should not be done in advance or in the same prompt, even with reasoning models. It'd be better to make a "double check" MCP that calls your prompt, asks whether anything in the reply should be amended or whether it can be used as is, amends it if needed, then gives the answer.

What you do not want to do is reposition your context into under-informed persona space, so leave the second-guessing out of the initial context. Instead, use the model as its own judge. (It doesn't have to be the same model; it could also be an alternate one.)
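A rough sketch of that two-pass pattern, written here as a plain script rather than an MCP server, using the Anthropic Python SDK; the model id, token limits, and reviewer wording are placeholders:

```python
import anthropic

client = anthropic.Anthropic()      # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # placeholder; an alternate judge model also works

def answer(question: str) -> str:
    # First pass: answer normally, with no second-guessing in the context.
    draft = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": question}],
    ).content[0].text

    # Second pass: a separate call judges the draft and amends it only if needed,
    # so the doubt never contaminates the original answering context.
    return client.messages.create(
        model=MODEL,
        max_tokens=1024,
        system=(
            "You are reviewing a draft answer. If it needs amendment, return the "
            "amended answer; otherwise return it unchanged."
        ),
        messages=[{
            "role": "user",
            "content": f"Question:\n{question}\n\nDraft answer:\n{draft}",
        }],
    ).content[0].text
```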

throwaway713 · 4 months ago
Ah, ChatGPT’s hidden INTP mode. We’ll finally get the right theory for ASI, but it will provide no clues on how to actually implement it in a timely manner.