motorest · 3 months ago
Taken from the blog:

> Why are we talking about “graduate and PhD-level intelligence” in these systems if they can’t find and verify relevant links — even directly after a search?

This is one of my pet peeves, and recently OpenAI's models seem to have become very militant in how they stand by and push their obviously hallucinated sources. I'm talking about hallucinating answers; when pressed to cite sources they also hallucinate URLs that never existed, when repeatedly prompted to verify them they stick to their clearly wrong output, and ultimately they fall back to claiming they were right but the URL somehow changed, even though it never existed in the first place.

In order to start talking about PhD-level intelligence, at the very least these LLMs must support PhD-level context-seeking and information verification. It is not enough to output a wall of text that reads quite fluently. You must stick to verifiable facts.

krzat · 3 months ago
The approach of generating something and then looking for hallucinations is just stupid. To validate the output I have to be an expert. How do I become an expert if I rely on LLMs? It's a dead end.
motorest · 3 months ago
> The approach of generating something and then looking for hallucinations is just stupid. To validate the output I have to be an expert.

No. You only need to check for sources, and then verify those sources exist and that they support the claims.

It's the very definition of "fact".

In some cases, all you need to do is check if a URL that was cited does exist.

vanschelven · 3 months ago
Including literal 404s... As an outsider it has always struck me as absurd that they don't just do the equivalent of wget over all provided sources.
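A minimal sketch of that check, assuming nothing about any particular model's tooling: take whatever URLs the model cited and confirm each one resolves before trusting the answer. Stdlib only; the cited URLs below are placeholders, not real model output, and some sites reject HEAD requests, so a real checker might need a GET fallback.

```python
# Minimal sketch of "wget over all provided sources": take the URLs a model
# cited and confirm each one actually resolves before trusting the answer.
import urllib.request
import urllib.error

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "citation-checker/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, TimeoutError):
        return False

cited = [
    "https://en.wikipedia.org/wiki/Large_language_model",  # placeholder citations
    "https://example.com/this-page-never-existed",
]
for url in cited:
    print(("OK   " if url_resolves(url) else "DEAD ") + url)
```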
alkonaut · 3 months ago
Or why the LLM doesn’t do a lookup against a subset of the training data as a database and reject the output if it seems to be wrong. A billion of the most popular URLs and the entirety of Wikipedia, arXiv and Stack Overflow would go a long way.
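A rough sketch of that kind of offline check; the file name and example citation are made up for illustration, and a real index of a billion URLs would want a Bloom filter or a database rather than an in-memory set.

```python
# Offline index of known-good URLs (e.g. built from Wikipedia, arXiv and
# Stack Overflow dumps) used to flag citations that can't be verified.
def load_known_urls(path: str) -> set[str]:
    """Load one URL per line into a set; "known_urls.txt" is a placeholder."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def unverifiable(cited: list[str], known: set[str]) -> list[str]:
    """Return the cited URLs that do not appear in the offline index."""
    return [url for url in cited if url not in known]

known = load_known_urls("known_urls.txt")
bad = unverifiable(["https://arxiv.org/abs/1706.03762"], known)  # example citation
if bad:
    print("Reject or regenerate: citations not found in the index:", bad)
```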
nkrisc · 3 months ago
Seems like the LLM is giving correct output if it’s generating a plausible string of tokens in response to your string of tokens.
motorest · 3 months ago
> Seems like the LLM is giving correct output if it’s generating a plausible string of tokens in response to your string of tokens.

No. If you prompt it for a response and then ask it to cite sources, and it outputs broken links that never existed, then it clearly failed to deliver correct output.

thom · 3 months ago
I have search enabled 100% of the time with ChatGPT and would never go back to raw-dogging LLM citations. O3 especially has passed the threshold of “not always annoying”. Had an argument with Gemini yesterday where it was insisting on some hallucinated implementation of a function even while giving me a GitHub link to the correct source.

esafak · 3 months ago
This is trivial to overcome by using a REST client to verify the link through MCP, and by caching results it wouldn't even add much latency.
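One way to sketch the caching half of that idea (plain stdlib, with the MCP wiring left out; the URL below is a placeholder): memoize the verifier so repeated checks of the same link cost nothing.

```python
# Sketch of a cached link verifier an agent could call (e.g. exposed as an MCP
# tool); repeated checks of the same URL hit the cache, so added latency is low.
from functools import lru_cache
import urllib.request
import urllib.error

@lru_cache(maxsize=4096)
def verify_link(url: str) -> bool:
    """Return True if the URL currently resolves; results are memoized."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "link-verifier/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status < 400
    except (urllib.error.URLError, TimeoutError):
        return False

print(verify_link("https://example.com/"))  # network hit
print(verify_link("https://example.com/"))  # served from cache
```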
simonw · 3 months ago
The key thing I got from this article is that the o3 and Claude 4 projects (I'm differentiating from the models here because the harness of tools around them is critical too) are massively ahead of GPT 4.1 and Gemini 2.5 when it comes to fact checking in a way that benefits from search and web usage.

The o3 finding matches my own experience: https://simonwillison.net/2025/Apr/21/ai-assisted-search/#o3...

Both o3 and Claude 4 have a crucial new ability: they can run tools such as their search tool as part of their "reasoning" phase. I genuinely think this is one of the most exciting new advances in LLMs in the last six months.

simonw · 3 months ago
Products, not projects.
zone411 · 3 months ago
If anyone is interested in a larger sample size comparing how often LLMs confabulate answers based on provided texts, I have a benchmark at https://github.com/lechmazur/confabulations/. It's always interesting to test new models with it because the results can be unintuitive compared to those from my other benchmarks.
dr_kiszonka · 3 months ago
Useful benchmark. I noticed o3-high hallucinating too often for such a good model, but it is usually great with search. In my experience, Claude Opus & Sonnet 4 consistently lie, cheat, and try to hide their tracks. Maybe they are good at writing code, but I don't trust them with other things.
dedicate · 3 months ago
It's not just that they get links wrong, it's how they get them wrong – like, totally fabricating them and then doubling down! A human messing up a citation is one thing, but this feels... different, almost like a creative act of deception, lol.
milleramp · 3 months ago
Took some time to realize the SIFT toolbox mentioned in the article is not a Scale-Invariant Feature Transform toolbox.
SubiculumCode · 3 months ago
I do wonder about the role of test time compute in the blog post in terms of document understanding. A non reasoning output (or low test time compute setting) might easily misinterpret the text, but reasoning models can second guess, consider multiple objectives in turn, and can right the ship.

I note that Gemini 2.5 has one of the lowest confabulation/hallucination rates according to this benchmark [1], so am surprised by the results in the blog.

Also, I have found that link hallucination and output quality improve when you restrict searches to, for example, only PubMed sources, and when you have it provide the source link directly in the text (as opposed to Gemini Deep Research's usual method of citation).

One reason, I think, is that unrestricted search will retrieve the paper along with the related blog posts and press releases and weight them as equal (and independent!) sources of a fact, when we know that nuance is lost in the latter; restricting the search may also mean it spends more test time compute on the quality sources, not the press releases.

[1] https://github.com/lechmazur/confabulations/

eviks · 3 months ago
> Why are we talking about “graduate and PhD-level intelligence” in these systems if they can’t find and verify relevant links

For exactly the same reason the author markets his tool as a research assistant

> It also models an approach that is less chatbot, and more research assistant in a way that is appropriate for student researchers, who can use it to aid research while coming to their own conclusions.

diego_moita · 3 months ago
I have a strange feeling: it seems that original insights and hallucinations are related. One seems to come very frequently with the other.

I've noticed that o3 is the one that lies with the most conviction (compared to Gemini Pro and Claude Sonnet). It is the hardest to convince that it is wrong, and will invent excuses and complex explanations for its lies, almost to a Trump level of lying and deception.

But it is also the one that provides the most interesting insights, that will look at what others don't see.

There might be some kind of deep truth in this correlation. Or it might be me having a hallucination...