Is that true? Is it anything more complicated than LLMs producing text optimized for plausibility rather than for any kind of ground truth?
In the end, <current AI model> is driven towards engagement and towards delivering an answer, and that pushes it to generate false answers when it doesn't know or understand.
If its personality were more tightly controlled, it would be a lot easier to have it give humbler, less confident answers, or even to have it admit that it doesn't know.
I’d say stick to your guns and find a job that supports your morals, not the other way around.