softwaredoug · 4 months ago
I agree with Simon’s article, but I usually take “research” to mean comparing different kinds of evidence (not just the search part). Like evidence for the effectiveness of Obamacare. Or how some legal case may play out in the courts. Or how much The Critic influenced Family Guy. Or even what the best way is to use X feature of Y library.

I’ve found ChatGPT and other LLMs can struggle to evaluate evidence - to understand the biases behind sources - e.g. taking data from a sketchy think tank as gospel. I have also found in my work that the more reasoning, the more hallucination. Especially when gathering many statistics.

That plus the usual sycophancy can cause the model to really want to find evidence to support your position. Even if you don’t think you’re asking a leading question, it can still strain to answer your question in the affirmative.

I always ask ChatGPT to directly cite and evaluate sources. And try to get it in the mindset of comparing and contrasting arguments for and against. And I find I must argue against its points to see how it reacts.

More here https://softwaredoug.com/blog/2025/08/19/researching-with-ag...

NothingAboutAny · 4 months ago
I tried to use Perplexity to find ideal settings for my monitor, and it responded with a concise list of distinct settings and the reasoning behind each. When I investigated the source, it was just people guessing and arguing with each other in the Samsung forums - no official or even backed-up information.

I'd love it if it had a confidence rating based on the sources it found or something, but I imagine that would be really difficult to get right.

Moosdijk · 4 months ago
I asked Gemini to do deep research on the role of health insurance companies in the decline of general practitioners in the Netherlands. It based its premise mostly on blogs and whitepapers from company websites whose job it is to sell automation software.

AI really needs better source validation. Not just to combat the hallucination of sources (which Gemini seems to do 80% of the time), but also to combat low-quality sources that happen to correlate well with the question in the prompt.

It's similar to Google having to fight SEO spam blogs; they now need to do the same in the output of their models.

ugh123 · 3 months ago
Seems like the right outcome was reached, by reviewing sources. I wish it went one step further and loaded those source pages and scrolled to/highlighted the snippets it pulled information from. That way we could easily double-check at least some aspects of its response, and content+ads could be attributed to the publisher.
wodenokoto · 4 months ago
But the really tricky thing is that sometimes it _is_ these kinds of forums where you find the best stuff.

When LLMs really started to show themselves, there was a big debate about what truth is, with even HN joining in on heated debates about the number of sexes or genders a dog may have, and whether it was okay for ChatGPT to respond with a binary answer.

On the one hand, I did find those discussions insufferable, but the deeper question - what is truth and how do we automate the extraction of truth from corpora - is super important and has somehow completely disappeared from the LLM discourse.

stocksinsmocks · 3 months ago
In the absence of easily found authoritative information from the manufacturer, this would have been my source too. Internet banter might actually be the best available information.
simonw · 4 months ago
It would be interesting to see if that same question against GPT-5 Thinking produces notably better results.
killerstorm · 4 months ago
FWIW GPT-5 (and o3, etc.) is one of the most critical-minded LLMs out there.

If you ask for information that is academic or technical, it will cite information and compare different results, etc., without any extra prompt or reminder.

Grok 4 (at the initial release) was just reporting information in the articles it found without any analysis.

Claude Opus 4 also seems bad: I asked it to give a list of JS libraries of a certain kind in deep research mode, and it returned a document focused on market share and usage statistics. It looks like it stumbled upon some articles of that kind and got carried away. Quite bizarre.

So GPT-5 is really good in comparison. Maybe not perfect in all situations, but perhaps better than an average human.

eru · 4 months ago
> So GPT-5 is really good in comparison. Maybe not perfect in all situations, but perhaps better than an average human.

Alas, the average human is pretty bad at these things.

btmiller · 4 months ago
How are we feeling about the usage of the word research to indicate feature sets in LLMs? Is it truly representative of research? How does it compare to the colloquial “do your research” refrain used often during US election years?
softwaredoug · 4 months ago
Well I will just need to start saying “critical thinking”? Or some other term?

I have a liberal arts background. So I use the term research to mean gathering evidence, evaluating its trustworthiness and biases, and avoiding the thinking errors related to evaluating evidence (https://thedecisionlab.com/biases).

LLMs can fall prey to these problems as well. Usually it’s not just “reasoning” that gives you trouble. It’s the reasoning about evidence. I see this with Claude Code a lot. It can sometimes create some weird code, hallucinating functionality that doesn’t exist, all because it found a random forum post.

I realize though that the term is pretty overloaded :)

gonzobonzo · 4 months ago
> I’ve found ChatGPT and other LLMs can struggle to evaluate evidence - to understand the biases behind sources - e.g. taking data from a sketchy think tank as gospel.

This is what I keep finding: it mostly repeats surface-level "common knowledge." It usually takes a few back-and-forths to get to whether or not something is actually true - asking for the numbers, asking for the sources, asking for the excerpts from the sources where they actually provide that information, verifying to make sure it's not hallucinating, etc. A lot of the time, it turns out its initial response was completely wrong.

I imagine most people just take the initial (often wrong) response at face value, though, especially since it tends to repeat what most people already believe.

athrowaway3z · 4 months ago
> It usually takes a few back-and-forths to get to whether or not something is actually true

This cuts both ways. I have yet to find an opinion or fact I could not make ChatGPT agree with as if objectively true. Knowing how to trigger (im)partial thought is a skill in and of itself and something we need to be teaching in school ASAP. (Which some already are, in one way or another.)

thom · 4 months ago
Yeah, trying to make well-researched buying decisions, for example, is really hard because you'll just get quite a lot of opinions dominated by marketing material, which aren't well counterbalanced by the sort of angry Reddit posts or YouTube comments I'd often treat as red flags.
vancroft · 4 months ago
> I always ask ChatGPT to directly cite and evaluate sources. And try to get it in the mindset of comparing and contrasting arguments for and against. And I find I must argue against its points to see how it reacts.

Same here. But it often produces broken or bogus links.

lambda · 4 months ago
I guess the part where I'm still skeptical is: Google is also still pretty good at search (especially if I avoid the AI summary with udm=14).

I'll take one of your examples: Britannica to seed Wikipedia. I searched for "wikipedia encyclopedia britannica". In less than 1 second, I got search results back.

I spend maybe 30 seconds scanning the page: past the Wikipedia article on Encyclopedia Britannica, past the Britannica article about Wikipedia, past a Reddit thread comparing them, past the Simple English Wikipedia article on Britannica, and past the Britannica article on Wiki. OK, there it is, the link to "Wikipedia:WikiProject Encyclopaedia Britannica"; that answers your question.

Then, to answer your follow-up, I spend a couple more seconds searching Wikipedia for Wikipedia, and find in the first paragraph that it was founded in 2001.

So, let's say a grand total of 60 seconds of me searching, skimming, and reading the results. The actual searching was maybe 2 or 3 seconds of time total, once on Google, and once on Wikipedia.

Compared to nearly 3 minutes for ChatGPT to grind through all of that, plus the time for you to read it, and hopefully verify by checking its references because it can still hallucinate.

And what did you pay for the privilege of doing that? How much extra energy did you burn for this less efficient response? I wish that when linking to chat transcripts like you do, ChatGPT would show you the token cost of that particular chat.

So yeah, it's possible to do search with ChatGPT. But it seems like it's slower and less efficient than searching and skimming yourself, at least for this query.

That's generally been my impression of LLMs; it's impressive that they can do X. But when you add up all the overhead of asking them to do X, having them reason about it, checking their results, following up, and dealing with the consequences of any mistakes, the alternative of just relying on plain old search and your own skimming seems much more efficient.

plopilop · 4 months ago
Agree. I tried the first 3 examples:

* "Rubber bouncy at Heathrow removal" on Google had 3 links, including the one about SFO from which chatGPT took a tangent. While ChatGPT provided evidence for the latest removal date being of 2024, none was provided for the lower bound. I saw no date online either. Was this a hallucination?

* A reverse image lookup of the building gave me the blog entry, but also an Alamy picture of the Blade (admittedly this result could have been biased by the fact that the author had already identified the building as the Blade)

* The Starbucks cake pop Google search led me to https://starbuckmenu.uk/starbucks-cake-pop-prices/. I will add that the author bitching to ChatGPT about ChatGPT's hidden prompts in the transcript is hilarious.

I get why people prefer ChatGPT. It will do all the boring work of curating the internet for you, to provide you with a single answer. It will also hallucinate every now and then, but that seems to be a price people are willing to pay and ignore, just like the added cost compared to a single Google search. Now I am not sure how this will evolve.

Back in the day, people would tell you to be wary of the Internet and that Wikipedia thing, and say that you could get all the info you needed from a much more reliable source at the library anyway, for a fraction of the cost. I guess that if LLMs continue to evolve, we will face the same paradigm shift.

animal531 · 4 months ago
I'm going to somewhat disagree based on my recent attempts.

Firstly, if we don't remove the Google AI summary then, as you rightly say, it makes the experience 10x worse. They still try to give an answer quickly, but the AI takes up a ton of space and is mostly terrible.

Googling for a GitHub repository just now, Google linked me to 3 resources, none of which was the actual page: one clone that was named the same, another garbage link, but luckily the 3rd was a Reddit post by the same person which linked to the correct page.

GPT does take a lot longer, but whether that matters for me depends on the scope of what I'm looking for. In the above example I didn't mind Google, because the 3 links opened fast and I could scan and click through to find what I was looking for, i.e. I wanted the information right now.

But then let's say I'm interested in something a bit deeper, for example how they did the unit movement in StarCraft 2. This is a well-known question, so the links/info you get from either Google or GPT are all great. If I were researching this topic via Google, I'd have to copy or bookmark the main topics to continue my research on them. Doing it via GPT, it returns the same main items, but I can very easily tell it to explain all those topics in turn, have it take the notes, find source code, etc.

Of course, as in your example, if you're a doctor googling symptoms, or perhaps the real-world location of ABC, then the specter of hallucination is a dangerous thing which you want to avoid at all costs. But for myself, I find that I can filter LLM mistakes as easily as noise/errors from manual searches.

My guess for the future of the Internet is that in N years there will be no such thing as manually searching for anything; everything will be assistant-driven via LLM.

simonw · 4 months ago
I suggest trying that experiment again but picking the hardest of my examples to answer with Google, not the easiest.
lambda · 3 months ago
Not sure which is the hardest, but sure, let's try them all.

* Bouncy people mover. Some Google searching turns up the SFO article that you linked. Trying to pin down the exact dates is harder. ChatGPT maybe did narrow down the time frame quicker than I could through a series of Google searches.

* The picture of the building. Go to Google Lens, paste in the image, and less than a second later I get results. Of course, the exact picture from this article comes up on top, but among the other results I get a mix of two different buildings, one of which is identified as the Blade, the other as Independence Temple. So a few seconds here between searching and doing my own quick visual scan of the results.

* Starbucks UK cake pops: This one is harder to find the full details on with a quick Google search. I am able to find that they were fairly recently introduced in the UK after my second search. It looks like ChatGPT gave you a bunch of extra response, some of which you didn't like, because you then spent a while trying to reverse engineer its system prompt rather than doing any actual follow-up on the question itself.

* Official name of the University of Cambridge: search gave me Wikipedia, and Wikipedia contains the official name plus a link to a reference on the University's page. Pretty quick to solve with Google Search/Wikipedia.

* Exeter quay. I searched for "waterfront exeter cliff building" and found this result towards the top of the results: https://www.exeterquay.org/milestones/ which explains "Warehouses were added in 1834 [Cornish's] and 1835 [Hooper's], with provision for storing tobacco and wine and cellars for cider and silk were cut into the cliffs downstream." You seemed to be a lot more entertained by ChatGPT's persistence in finding more info, but for satisfying curiosity about the basic question, I got an answer pretty quickly via Google.

* Aldi vs Lidl: this is a much more subjective question, so whether the results you get via a quick Google search meet your needs, vs. the summary of subjective results you get via ChatGPT, is more a question for you to answer. I do find some Reddit threads and similar with a quick Google search.

* Book scanning. You asked specifically about destructive book scanning. You can do a quick search of each of the labs plus "book scanning" and find the same lack of results that ChatGPT gives you. It maybe takes a similar amount of time to how long ChatGPT spent thinking. You pretty much only find references to Anthropic doing destructive book scanning, and Google doing mostly non-destructive scanning.

Anyhow, the results are mixed. For a bunch of these, I found an answer quicker via a Google search (or Google Lens search), and doing some quick scanning/filtering myself. A few of them, I feel like it was a wash. A couple of them actually do take more iteration/research, the bouncy travelator being the most extreme example, I think; narrowing down the timeline on my own would take a lot of detailed looking through sources.

IanCal · 4 months ago
As a counterpoint, I asked that simple question to GPT-5 in auto mode and it started replying in two seconds, wrote fast enough for me to scan the answer, and gave me two solid links to read after.

With thinking it took longer (just shy of two minutes), but it compared a variety of different sources and came back with numbers, with each statement in the summary sourced.

I’ve used GPT a bunch for finding things like bin information on the council site that I just couldn’t easily find myself. I’ve also sent it off to dig through PRs, specs, and more for Matrix, where it found the features and experimental flags required to solve a problem I had. Reading that many proposals and checking what’s been accepted is a massive pain, and it solved this while I went to make a coffee.

dwayne_dibley · 4 months ago
I wonder how all this will really change the web. In your manual mode you, a human, are viewing and visiting webpages. But if one never needs to, and always interacts with the web through an agent, what does the web need to look like, and will people even bother making websites? Interesting times ahead.
gitmagic · 4 months ago
I’ve been thinking about this as well. Instead of making websites, maybe people will make something else, like some future version of MCP tools/servers? E.g. a restaurant could have an “MCP tool” for checking opening hours, reserving a table, etc.
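
As a rough sketch of what that might look like (assuming the official Python MCP SDK's FastMCP interface; the restaurant tools here are made up for illustration, not a real product):

    # Hypothetical restaurant MCP server - a sketch, not a real service.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("restaurant")

    @mcp.tool()
    def opening_hours(day: str) -> str:
        """Return the restaurant's opening hours for a weekday name."""
        hours = {"monday": "closed"}  # mock data for the sketch
        return hours.get(day.lower(), "17:00-23:00")

    @mcp.tool()
    def reserve_table(name: str, party_size: int, time: str) -> str:
        """Record a mock reservation and return a confirmation string."""
        return f"Reserved a table for {party_size} under '{name}' at {time}."

    if __name__ == "__main__":
        mcp.run()  # serves the tools over stdio by default

An agent could then call opening_hours or reserve_table directly instead of scraping the restaurant's website.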
bgwalter · 4 months ago
Yes, Google with udm=14 is much better than "AI". "AI" might work for the trivia-type questions from this article, which most people aren't interested in to begin with.

It fails completely for complex political or investigative questions where there is no clear answer. Reading a single Wikipedia page is usually a better use of one's time:

You don't have to pretend that you are parallelizing work (which is just for show) while waiting three minutes for the "AI" answer. You practice speed reading and memory retention. You enhance your own semantic network instead of the network owned and controlled by oligopoly members.

Faaak · 3 months ago
A bit unrelated, but on Firefox there's the Straight to the Web extension that automatically appends the udm=14 param, so AI gets disabled :-)
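
(If you'd rather not install anything: as far as I know it's just a query parameter on the results URL, e.g.

    https://www.google.com/search?q=example&udm=14

which serves the plain "Web" results tab without the AI Overview.)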
wilg · 4 months ago
First, you not having to spend the 60 seconds means you can parallelize it with something else, so you get the answer effectively instantly. Second, you're essentially establishing that if an LLM can get it done in less than 60 seconds, it's better than your manual approach, which is a huge win, as this will only get faster!
sigmoid10 · 4 months ago
For real. This is what it must have been like living in the early 20th century and hearing people say they prefer a horse to get groceries because it is so much more effort to crank-start a car. I look forward to the age when we gleefully reminisce about the time we had to deal with SEO spam manually.
lambda · 3 months ago
There's no useful parallelization that could happen during this particular search. This took a couple of iterations of research via ChatGPT, then reading the results and looking at the referenced sources; the total interaction time with ChatGPT is a similar 60 seconds or so. The main difference is the 3 minutes of waiting for it to generate answers vs. the couple of seconds for the searches.
utyop22 · 4 months ago
V nice post. Captures my sentiment too
Jordan-117 · 4 months ago
It really is great. When I was still on Reddit, I made regular use of the "Tip of My Tongue" sub to track down obscure stuff I half-remembered from years ago. It mostly worked, but there were a few stubborn cases that went unsolved, even after pouring every ounce of my Google Fu into the endeavor. I recently took the text of these unsolved posts and submitted them to Deep Research -- and within an hour, it had cracked four of them, and put me on track to find a fifth myself. Even if the reasoning part isn't entirely up to par, there's still something really powerful about being able to rapidly digest dozens of search results and pull out relevant information based on a loose description. And now I can have that kind of search power on demand in just a few minutes, without having to deal with Reddit's spambots and post filters and hordes of users who don't read the question or follow the sub's basic rules.
OfflineSergio · 4 months ago
When it comes to information retrieval, you can get anything between links to existing documents and generated content based on the processed information. I agree that the second one is really powerful and just amazing and seemingly useful. But the fact that it can also be wrong in more cases, without my knowing, keeps being driven home when I use it for things I'm not good at and they just don't work as they should.

I just wish the business models could justify a confidence level being attached to the response.

larsiusprime · 4 months ago
I find ChatGPT to be great at research too - but there are pathological failure modes where it is biased toward shallow answers that are subtly wrong, even when definitive primary sources are readily available online:

https://www.fortressofdoors.com/researchers-beware-of-chatgp...

ants_everywhere · 4 months ago
This isn't really how you described it. You have an opinion that conflicts with the research literature. You published a blog post about that opinion, and you want ChatGPT to accept your view.

Your view is grinding a political axe and I don't think you're in a position to objectively assess whether ChatGPT failed in this case.

larsiusprime · 4 months ago
What are you talking about? There are verifiable primary sources that ChatGPT was not citing. There are direct primary historical sources that lay out the full budget of the historical German colony in extreme detail and directly contradict assertions made in the Silagi paper. That's not a matter of opinion; that's a matter of verifiable fact.

Also, what “axe” am I grinding? The findings are specifically inconvenient for my political beliefs, not confirming my priors! My priors would be flattered if Silagi were correct about everything, but the primary sources definitively prove he's exaggerating.

> You published a blog post about that opinion, and you want ChatGPT to accept your view.

False, and I address this multiple times in the piece. I don’t want ChatGPT to mindlessly agree with me, I want it to discover the primary source documents.

eru · 4 months ago
Hmm, I suspect that if ChatGPT paid more attention to the German sources, it would perhaps find that supposedly right answer?

I wonder if asking ChatGPT in German would make a difference.

typpilol · 4 months ago
Yeah, this isn't really a ChatGPT problem so much as a source credibility problem, no?
jbm · 4 months ago
Yes, this is very much my experience too.

Switching to GPT-5 Thinking helps a little, but it often misses things that it wouldn't have missed when I was using o3 or o1.

As an example, I asked it if there were any incidents involving Botchan in an onsen. This is a text that is readily available and must have been trained on; in the book, Botchan goes swimming in the onsen, and is then humiliated when, the next time he comes back, there is a sign saying "No swimming in the onsen".

GPT-5 gives me this, which is subtly wrong:

> In the novel, when Botchan goes to Dōgo Onsen, he notes the posted rules of the bath. One of them forbids things like:
> “No swimming in the bath.” (泳ぐべからず)
> “No roughhousing / rowdy behavior.” (無闇に騒ぐべからず)
> Botchan finds these signs funny because he’s exactly the sort of hot-headed, restless character who might be tempted to splash around or make noise. He jokes in his narration that it seems as though the rules were written specifically to keep people like him out.

Incidentally, Dōgo Onsen still has the "No swimming" sign, or it did when I went 10 years ago.

black_knight · 4 months ago
I feel like the value of my Plus subscription went down when they released GPT-5; it feels like a downgrade from o3. But of course, OpenAI being not open, there is no way for me to know now.
simianwords · 4 months ago
I found your article interesting and it is relevant to the discussion. To be honest, while I think GPT could have performed better here, I think there is something to be said about this:

There is value in pruning the search tree, because the deeper nodes are usually not reputable. I know you have cause to believe that "Wilhelm Matzat" is reputable, but I don't think that can be assumed generally. If you were to force GPT to blindly accept counterpoints from people, the debate would never end. And there has to be a pruning point at which GPT accepts this tradeoff: maybe the less reputable or well-known sources have a correct point, at the cost of being incorrect more often due to taking an incorrect analysis from a not-well-known source.

You could go infinitely deep into any analysis and you will always have seemingly correct points on both sides. I think it is valid for GPT to prune the search at a point where it converges to what society at large believes. I'm okay with this tradeoff.

larsiusprime · 4 months ago
My contention is that if it's going to just give me a Wikipedia summary, I can do that myself. I just have greater expectations of "PhD"-level intelligence.

If we’re going to claim it is PhD level, it should be able to do “deep” research AND think critically about source credibility, just as a PhD would. If it can’t do that, they shouldn’t brand it that way.

Also, it’s not like I’m taking Matzat’s word for anything. I can read the primary source documents myself! He’s also hardly an obscure source; he’s just not listed on Wikipedia.

Helmut10001 · 4 months ago
More recently, I find ChatGPT becoming increasingly unreliable. It makes up almost every second answer, forgets context, or is just downright wrong. Maybe I am just used, these days, to dumping huge texts into the prompt for context, as AI Studio allows me to. Maybe ChatGPT isn't as good with such information. Gemini/AI Studio will stay on track even with 300k tokens consumed; it just needs a little nudge here and there.
herewegohawks · 3 months ago
FWIW, I found things improved greatly once I turned off the memory feature of ChatGPT. My guess is that a lot of tokens were going towards trying to follow instructions from past conversations.
kmijyiyxfbklao · 3 months ago
This doesn't tell us much. I don't know why you would expect ChatGPT to do original PhD research. It's a general product that will trust already-published research. That doesn't mean that GPT-5 can't do PhD research when given the right sources.
psadri · 4 months ago
I do miss the earlier "heavy" models that had encyclopedic knowledge vs. the new "lighter" models that rely on web search. Relying on web search surfaces a shallow layer of knowledge (thanks to SEO and all the other challenges of ranking web results) vs. having ingested/memorized basically the entirety of human written knowledge, beyond what's typically reachable within the first 10 results of a web search (e.g. digitized offline libraries).
hamdingers · 4 months ago
I feel the opposite. Before I can use information from a model's "internal" knowledge I have to engage in independent research to verify that it's not a hallucination.

Having an LLM generate search strings and then summarize the results does that research up front and automatically, I need only click the sources to verify. Kagi Assistant does this really well.

beefnugs · 4 months ago
So does anyone have any good examples of it effectively avoiding the blogspam and SEO? Or being fooled by it? How often either way?
mastercheif · 4 months ago
I kept search off for a long time due to it tanking the quality of the responses from ChatGPT.

I recently added the following to my custom instructions to get the best of both worlds:

# Modes

When the user enters the following strings you should follow the following mode instructions:

1. "xz": Use the web tool as needed when developing your answer.

2. "xx": Exclusively use your own knowledge instead of searching the internet.

By default use mode "xz". The user can switch between modes during a chat session. Stay with the current mode until the user explicitly switches modes.
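
In practice, a chat turn then looks something like this (a made-up example):

    xx When was the Eiffel Tower built?

and the model answers from its own weights until a later message starting with "xz" switches search back on.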

ants_everywhere · 4 months ago
Most real knowledge is stored outside the head, so intelligent agents can't rely solely on what they've remembered. That's why libraries are so fundamental to universities.
stephen_cagle · 4 months ago
I think this is partially something I have felt myself as well. It would be interesting if these lighter web-search models would highlight the distinction between information that has been seen elsewhere vs. information that is novel on each page. Like a view that lets me look at the things that have been asserted and see how many of the different pages assert those facts (vs. leave them unmentioned vs. contradict them).
simianwords · 4 months ago
There is a tradeoff here: the non-search models are internally heavy, but the search models are light and depend on real data.

I keep switching between both but I think I'm starting to prefer the lighter one that is based on the sources instead.

killerstorm · 4 months ago
These models are still available: GPT-4.5, Gemini 2.5 Pro (at least the initial version - not sure if they optimized it away).

From what I can tell, they are pretty damn big.

Grok 4 is quite large too.

gerdesj · 4 months ago
"encyclopedic knowledge"

Have you just hallucinated that?

indigodaddy · 4 months ago
Pretty wild! I wonder how much high school teachers and college professors are struggling with the inevitable usage though?

"Do deep internet research and thinking to present as much evidence in favor of the idea that JRR Tolkein's Lord of the Rings trilogy was inspired by Mervyn Peake's Gormenghast series."

https://chatgpt.com/share/68bcd796-bf8c-800c-ad7a-51387b1e53...

sixtyj · 4 months ago
Did you check the facts? Did you click through all the links and see what the sources are?

A while ago I bragged at a conference about how ChatGPT had "solved" something... Yeah, we know, it's from Wikipedia and it's wrong :)

currymj · 4 months ago
The thing about students who cheat is that most of them are (at least in the context of schoolwork) very lazy and don't care if their work is high quality. I would guess waiting multiple minutes for Thinking mode to give thorough results is very unappealing. 4o or 4o-mini was already good enough for their purposes.
wtbdbrrr · 4 months ago
Idea: workshops for teachers that teach them some kind of Socratic method that stimulates kids to support what they got from G with their own thinking, however basic and simple it may be.

Formulating the state of your current knowledge graph, that was just amplified by ChatGPT's research might be a way to offset the loss of XP ... XP that comes with grinding at whatever level kids currently find themselves ...

esafak · 4 months ago
I was amused that it used the neologism 'steel-man' -- redundantly, too.
IanCal · 4 months ago
I'm a bit confused - how is it redundant here? It's trying to make the best possible argument for one side that seems to be wrong. Instead of taking the argument at face value, it takes the most charitable understanding of it (not requiring that it happened before, but that some parts were perhaps inspired during later revisions) and tries to argue that case.
meshugaas · 4 months ago
These answers take a shockingly long time to resolve considering you can put the questions into Brave search and get basically the same answers in seconds.
ignoramous · 4 months ago
The thing is, with Chat+Search you don't have to click various links, sift through content farms, or be subject to ads and/or accidental malware download.
dns_snek · 4 months ago
In practice this means that you get the same content-farm answer dressed up as a trustworthy one, without even getting the opportunity to exercise better judgement. God help you if you rely on them for questions about branded products; they happily rephrase the company's marketing materials as facts.
apparent · 4 months ago
I like Brave but have found their search to be awful. The AI stuff seems decent enough, but the results populated below are just never what I'm looking for.
ekianjo · 4 months ago
With the walls of low quality sites optimized for SEO these days? Call me unconvinced
j_bum · 4 months ago
Is this the “Web Search”, “Deep Research”, or “Agent Mode” feature of ChatGPT?

Navigating their feature set is… fun.

simonw · 4 months ago
It's not the Deep Research or Agent Mode.

I select "GPT-5 Thinking" from the model picker and make sure its regular search tool is enabled.

j_bum · 4 months ago
Good to know, I’ll try to just use this a bit more then. I always opt for one of the above modes, with varying degrees of success.

Not sure if you tend to edit your posts, but it could be worth clarifying.

Btw — my colleagues and I all love your posts. I’ll quit fanboying now lol.

jonahx · 4 months ago
> This is excellent for satisfying curiosity, and occasionally useful for more important endeavors as well.

Small nit, Simon: satisfying curiosity is the important endeavor.

<3

650REDHAIR · 4 months ago
In my experience it’s “search Reddit and combine comments”.
dontdoxxme · 4 months ago
There are searches where that is the best way for a human to get the answer too. It can also search the Internet Archive if you ask for historical details, so does it not just do what a good human researcher would do?
yunohn · 4 months ago
I have a feeling this is just ChatGPT 5 in thinking mode, with web search enabled at the profile level at least. Even without that, any query that hints at needing recent data or research will prompt it to think+research quite a bit, i.e. deep research.
iguana2000 · 4 months ago
I believe this is just the normal mode. In my experience, you don't have to select the web search option to make it search the web. I wonder why they have web search as an option at this point (to force the LLM to search?).
movedx01 · 4 months ago
Don't forget about the "ChatGPT 5 Pro" too :) which is a bit like Deep Research but not quite?