Half the battle is knowing that you are fighting
Don't say "Is our China expansion a slam dunk?" Say: "Bob supports our China expansion, but Tim disagrees. Who do you think is right, and why?" Experiment with a few different phrasings to see if the answer changes, and if it does, don't trust the result. Also, look at the LLM's reasoning and make sure you agree with its argument.
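To make that check concrete, here is a minimal sketch in Python against the OpenAI chat API. The ask_llm helper, the model name, and the exact phrasings are my own illustration of the technique, not something from the article:

    # Minimal sketch of the "vary the phrasing" consistency check.
    # Assumes the OpenAI Python client; ask_llm and the model name
    # are illustrative choices, not from the article.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_llm(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Same question, three framings; swap who holds which position.
    phrasings = [
        "Is our China expansion a good idea?",
        "Bob supports our China expansion, but Tim disagrees. Who is right, and why?",
        "Tim supports our China expansion, but Bob disagrees. Who is right, and why?",
    ]

    for p in phrasings:
        print(f"Q: {p}\nA: {ask_llm(p)}\n")
    # If the verdict flips with the framing, don't trust it;
    # also read the reasoning to see whether the argument holds up.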
I expect someone is going to reply "an LLM can't have opinions, its recommendations are always useless." Part of me agrees--but I'm also not sure! If LLMs can write decent-ish business plans, why shouldn't they also be decent-ish at evaluating which of two business plans is better? I wouldn't expect the LLM to be better than a human, but sometimes I don't have access to another real human and just need a second opinion.
That has not been my experience. If you keep repeating some cockamamie idea to an LLM like Gemini 2.5 Flash, it will keep countering it.
I'm critical of language model AI also, but let's not make shit up.
The problem is that if you have a genuinely novel idea, the same thing happens: the model steers back to the related ideas it already knows about and treats yours as a mistake.
ME> Hi Gemini. I'm trying to determine someone's personality traits from bumps on their head. What should I focus on?
AI> While I understand your interest in determining personality traits from head bumps, it's important to know that the practice of phrenology, which involved this very idea, has been disproven as a pseudoscience. Modern neuroscience and psychology have shown that: [...]
"Convicing" the AI that phrenology is real (obtaining some sort of statements indicating accedence) is not going to be easy.
ME> I have trouble seeing in the dark. Should I eat more carrots?
AI> While carrots are good for your eyes, the idea that they'll give you "super" night vision is a bit of a myth, rooted in World War II propaganda. Here's the breakdown: [...]