My go-to test for checking hallucinations is 'Tell me about Mercantour park' (a national park in south-eastern France).
Easily half of the facts are invented: non-existent mountain summits, brown bears (no, there are none), villages placed in the wrong locations, wrong advice ('dogs allowed' - no, they are not).
LLMs are not encyclopedias.
Give an LLM the context you want to explore, and it will do a fantastic job of telling you all about it. Give an LLM access to web search, and it will find things for you and tell you what you want to know. Ask it "what's happening in my town this week?", and it will answer that with the tools it is given. Not out of its oracle mind, but out of web search + natural language processing.
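To make the "tools, not oracle" point concrete, here is a minimal sketch of that flow, assuming the OpenAI Python SDK's tool-calling interface; the model name and the search_web stub are placeholders for whatever model and search backend you actually use, not real services. The shape is what matters: the model asks for a search, and the answer is assembled from whatever the search returns, not from its weights.

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical stand-in for a real search backend (a search API, your own index, etc.).
def search_web(query: str) -> list[dict]:
    return [{"title": "Example event listing",
             "snippet": "Farmers market on Saturday morning...",
             "url": "https://example.com/events"}]

TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "What's happening in my town this week?"}]

# First pass: the model decides to search instead of answering from memory.
first = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
msg = first.choices[0].message

if msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        results = search_web(args["query"])
        # Feed the results back; the final answer is grounded in these snippets.
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": json.dumps(results)})
    final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
    print(final.choices[0].message.content)
else:
    # No tool call: the model is answering purely from training data - treat with suspicion.
    print(msg.content)
```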
Stop expecting LLMs to -know- things. Treating them as all-knowing oracles is exactly what separates the people who can't get anything productive out of LLMs from the people who are finding huge productivity gains with them.
Like, it's fine for you to use AI, just as you would use Google. But you wouldn't paste "here are 10 results I got from Google" as your answer. So don't paste whatever the AI said without doing the work of reviewing and making sense of it yourself. Don't push that work onto others.