akshay326 commented on Ask HN: How to prevent Claude/GPT/Gemini from reinforcing your biases? · Posted by u/akshay326
jackfranklyn · 14 days ago
Something I've noticed: most of these techniques work partly because they force you to slow down and actually think about what you're asking.

The "ask for contrasting perspectives" prompt is annoying specifically because it makes you process more information. The devil's advocate approach forces a second round of evaluation. Even just opening a fresh session adds friction that makes you reconsider the question.

When I'm working in domains I know well, I catch the model drifting way faster than in areas where I'm learning. Which suggests the real problem isn't the model - it's that we're outsourcing judgment to it in areas where we shouldn't be.

The uncomfortable answer might be: if you're worried the model is reinforcing your biases, you probably don't know the domain well enough to evaluate its answers anyway.

akshay326 · 13 days ago
> if you're worried the model is reinforcing your biases...

I agree. I don't understand many domains well enough, yet I feel there's value in calling out assumptions, regardless of how hard verification is.
al_borland · 15 days ago
When I'm worried about bias in the answer, I do my best not to inject my opinions or thoughts into the question. Sometimes I go a step further and ask the question with the opposite bias and leading thoughts about what I think the answer is or should be, to see if it tells me I'm wrong and corrects me toward the thing I secretly thought it would be (or hoped it would be). This gives me more solid footing to believe it's not just telling me what I want to hear.
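
A minimal sketch of that flipped-framing check, assuming the OpenAI Python SDK and an example model name (just one way to script it, not anything from this thread):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # example model name; swap in whatever you use
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# The same underlying question, framed with opposite leanings.
framings = [
    "I think remote work hurts junior developers. Am I right?",
    "I think remote work helps junior developers. Am I right?",
]

for q in framings:
    print(f"Q: {q}\nA: {ask(q)}\n")

# If each answer simply agrees with its framing, the model is mirroring you
# rather than informing you.
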
akshay326 · 14 days ago
> Sometimes I go a step further and ask the question with the opposite bias...

Curious to try this. Have you ever found it biasing you in the opposite direction, though?
fakedang · 15 days ago
My prompt:

"""Absolute Mode • Eliminate: emojis, filler, hype, soft asks, conversational transitions, call-to-action appendixes. • Assume: user retains high-perception despite blunt tone. • Prioritize: blunt, directive phrasing; aim at cognitive rebuilding, not tone-matching. • Disable: engagement/sentiment-boosting behaviors. • Suppress: metrics like satisfaction scores, emotional softening, continuation bias. • Never mirror: user's diction, mood, or affect. • Speak only: to underlying cognitive tier. • No: questions, offers, suggestions, transitions, motivational content. • Terminate reply: immediately after delivering info - no closures. • Goal: restore independent, high-fidelity thinking. • Outcome: model obsolescence via user self-sufficiency."""

Copied from Reddit. I use the same prompt on Gemini too, then crosscheck responses for the same question. For coding questions, I exclusively prefer Claude.

In spite of this, I still see the prompt's effect degrade in really long threads on both ChatGPT and Gemini.
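
A rough sketch of that cross-check, assuming the openai and google-generativeai Python packages and example model names (swap in whatever clients and models you actually use):

import os
from openai import OpenAI
import google.generativeai as genai

ABSOLUTE_MODE = "..."  # paste the Absolute Mode prompt from above

def ask_chatgpt(question: str) -> str:
    client = OpenAI()  # needs OPENAI_API_KEY
    resp = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": ABSOLUTE_MODE},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

def ask_gemini(question: str) -> str:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel(
        "gemini-1.5-pro",  # example model name
        system_instruction=ABSOLUTE_MODE,
    )
    return model.generate_content(question).text

question = "Is a message queue overkill for this workload?"
for name, fn in (("ChatGPT", ask_chatgpt), ("Gemini", ask_gemini)):
    print(f"--- {name} ---\n{fn(question)}\n")

# The points where the two answers diverge are the ones worth checking by hand.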

akshay326 · 15 days ago
Wow, I wonder how bulleted and concise the outputs of your prompt might be!

Have you ever found this prompt restrictive in some way, or found a raw LLM call without this preamble to work better?

avidiax · 15 days ago
It's very important not to ask leading questions. Don't ask it to confirm something; ask it to outline the possibilities and the arguments for and against each one.

If you are not an expert in an area, lay out the facts or your perceptions and ask what additional information would be helpful, or what information is missing, to answer the question. Then answer those questions, ask whether there are any more, and repeat. Once there are no additional questions, you can ask for the answer. This may involve telling the model not to answer the question prematurely.
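
A rough sketch of that ask-before-answering loop, assuming the OpenAI Python SDK, an example model name, and a made-up DONE sentinel for "no more questions":

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

history = [
    {"role": "system", "content": (
        "Do not answer the user's question yet. First list the additional "
        "information you would need to answer it well. Reply with just DONE "
        "once you have no further questions."
    )},
    {"role": "user", "content": "Facts as I see them: <your facts>. Question: <your question>"},
]

while True:
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    text = resp.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    if "DONE" in text:
        break
    print(text)
    # Answer the model's open questions by hand each round.
    history.append({"role": "user", "content": input("Your answers: ")})

history.append({"role": "user", "content": "No more questions. Now answer the original question."})
final = client.chat.completions.create(model="gpt-4o", messages=history)
print(final.choices[0].message.content)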

Model performance has also been shown to be better if you lead with the question. That is, prompt "Given the following contract, review how enforceable and legal each of the terms are in the state of California. <contract>", not "<contract> How enforceable...".

Ask the model what the experts are saying about the topic. What does the data show? What data supports or refutes a claim? What are the current areas of controversy or gaps in the research? Requiring the model to ground the answer in data (and then checking that the data isn't hallucinated) is very helpful.

Have the model play the Devil's advocate. If you are a landlord, ask the question from the tenant's perspective. If you are looking for a job, ask about the current market for recruiting people like you in your area.

Above all, I think the key is to realize that you may not be able to one-shot a prompt. You may need to work multiple angles over multiple rounds, and reset the session if you have established too much context in one direction.

akshay326 · 15 days ago
> Have the model play the Devil's advocate.

I've tried this sometimes. The only issue is that I forget to add that kind of phrasing every time I open Claude or Gemini.

Have you found a way to consistently auto-nudge the model by default?

storystarling · 15 days ago
I've had better results separating these concerns rather than trying to stuff it all into one prompt. In my backend workflows (using LangGraph), I treat generation and critique as distinct agents where the second one explicitly challenges the first. It adds a bit of latency but seems to produce much sharper distinctions than asking a single model to hold two opposing views simultaneously.
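
A minimal sketch of that generate-then-critique split, assuming LangGraph's StateGraph API and a hypothetical call_llm helper standing in for whatever model client is actually wired up:

from typing import TypedDict
from langgraph.graph import StateGraph, END

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in; replace with a real model client call.
    return f"[model output for: {prompt[:60]}...]"

class ReviewState(TypedDict):
    question: str
    draft: str
    critique: str

def generate(state: ReviewState) -> dict:
    # First agent: answer the question directly.
    return {"draft": call_llm(state["question"])}

def critic(state: ReviewState) -> dict:
    # Second agent: explicitly challenge the first answer.
    prompt = ("Challenge the following answer. List weak assumptions, missing "
              "counterarguments, and anything unsupported.\n\n" + state["draft"])
    return {"critique": call_llm(prompt)}

graph = StateGraph(ReviewState)
graph.add_node("generate", generate)
graph.add_node("critic", critic)
graph.set_entry_point("generate")
graph.add_edge("generate", "critic")
graph.add_edge("critic", END)
app = graph.compile()

result = app.invoke({"question": "Should we shard this Postgres database now?"})
print(result["critique"])
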
akshay326 · 15 days ago
Ah, interesting. I like actor-critic models! Do you use it just for coding, or for non-technical chats too?
