Readit News
Posted by u/davidkuennen 5 months ago
Ask HN: Do you still use search engines?
Today, I noticed that my behavior has shifted over the past few months. Right now, I exclusively use ChatGPT for any kind of search or question.

Using Google now feels completely lackluster in comparison.

I've noticed the same thing happening in my circle of friends as well—and they don’t even have a technical background.

How about you?

wavemode · 5 months ago
Search is primarily a portal - you know a particular resource exists, you just don't know its exact URL.

You hear about this new programming language called "Frob", and you assume it must have a website. So you google "Frob language". You hear that there was a plane crash in DC, and assume (CNN/AP/your_favorite_news_site) has almost certainly written an article about it. You google "DC plane crash."

LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.

Where LLMs will take over from search is when it comes to open-ended research - where you don't know in advance where you're going or what you're going to find. I don't really have frequent use cases of this sort, but depending on your occupation it might revolutionize your daily work.

Modified3019 · 5 months ago
IMO an example of a good use case for an LLM, which would be otherwise very hard to search for, is clarifying vague technical concepts.

Just yesterday I was trying to remember the name of a vague concept I’d forgotten, with my overall question being:

“Is there a technical term in biology for the equilibrium that occurs between plant species producing defensive toxins, and toxin resistance in the insect species that feed on those plants, whereby the plant species never has enough evolutionary pressure to increase its toxin load enough to kill off the insect that is adapting to it”

After fruitless searching around because I didn’t have the right things to look for, putting the above in ChatGPT gave an instant reply of exactly what I was looking for:

“Yes, the phenomenon you're describing is often referred to as evolutionary arms race or coevolutionary arms race.”

tofof · 5 months ago
While I do like LLMs for these tasks, unfortunately this one failed you but was a near enough miss that you couldn't see it. What you were really looking for is the Red Queen problem/hypothesis/race, named after a quote from Through the Looking Glass, with the Queen explaining to Alice: "Now, here, you see, it takes all the running you can do, to keep in the same place." In particular, the Red Queen term is specifically the equilibrium you inquired about, where relative fitness is unchanging, rather than the more general concept of an evolutionary arms race in which there can be winners and losers. The terms 'evolutionary equilibrium' and 'evolutionary steady state' are also used to capture the idea of the equilibrium, rather than just of competition.

Evolutionary arms race is somewhat tautological; an arms race is the description of the selective pressure applied by other species on evolution of the species in question. (There are other, abiotic sources of selective pressures, e.g. climate change on evolutionary timescales, so while 'evolution' at least carries a broader meaning, 'arms race' adds nothing that wasn't already there.)

That said, using your exact query on deepseek r1 and claude sonnet 3.7 both did include red queen in their answers, along with other related concepts like tit for tat escalation.

Cthulhu_ · 5 months ago
I just want to say that this thread / the responses to your question are better than either search engines or LLMs can ever come up with, and shows the truth of Cunningham's Law: "The best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer"

Or updated for the LLM age, "the best way to get the right answer from an LLM is not to ask it a question and use its answer; it's to post its response on a site of well-educated and easily nerdsniped people"

Slow_Hand · 5 months ago
Same. This is one of the few uses for LLMs that I actually find useful and that I trust.

They’re very helpful for asking more refined questions by getting the terminology correct.

RyanOD · 5 months ago
Agreed. I'm increasingly using ChatGPT to research topics. In that way, I can refine my question, drill down, ask for alternatives, supply my own supplementary information, etc.

I think of AI as an intelligent search engine / assistant and, outside of simple questions with one very specific answer, it just crushes search engines.

paul7986 · 5 months ago
Present day...

Google 55%, as GPT is not a local search engine.

GPT 45%, but I use it for more intelligent learning/conversations/knowledge base.

If I had a GPT phone... sorta like the movie Her, I would rarely leave my phone's lockscreen. My AI device / super AI human friend would do everything for me, including getting me to the best lighting to take the best selfies...

throwaway422432 · 5 months ago
Synthesizing what would be multiple searches into one prompt is where they can be really useful.

For example: Take the ingredient list of a cosmetic or other product that could be 30-40 different molecules and ask ChatGPT to list out what each of them is and if any have potential issues.

You can then verify what it returns via search.

exe34 · 5 months ago
that sounds like a really great example of..... searching through vector embedding. I don't think the LLM part was technically necessary.
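
A minimal sketch of that idea, with a toy hashing embedder standing in for a real embedding model (the documents and the `embed` function below are invented for illustration):

```python
import numpy as np

# Toy embedder: hashes words into a bag-of-words vector. A real system
# would use a learned embedding model; this stand-in only captures
# exact word overlap, but the retrieval machinery is the same.
def embed(text: str, dim: int = 64) -> np.ndarray:
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Red Queen hypothesis: coevolving species must keep adapting to hold relative fitness",
    "Evolutionary arms race between plant toxins and insect resistance",
    "A mushroom soup recipe that freezes well",
]
doc_vectors = np.stack([embed(d) for d in documents])

def search(query: str, k: int = 2) -> list[str]:
    # Vectors are unit-norm, so the dot product is cosine similarity.
    scores = doc_vectors @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

print(search("equilibrium between plant toxins and insect resistance"))
```

Swap the toy `embed` for a real model (e.g. a sentence-transformer) and the retrieval step, cosine similarity plus top-k, stays exactly the same; no generative LLM is involved.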
rstuart4133 · 5 months ago
I'd go further, and say I use search when I'm pretty confident I know the right search terms. If I don't, I'll type some wordy long explanation of what I want into an LLM and hope for the best.

The reason is pretty simple. If the result you want is in the first few search hits, it's always better. Your query is shorter so there is less typing, the search engine is always faster, and the results are far better because you sidestep the LLM hallucinating as it regurgitates the results it remembers from the page you would have read if you had searched.

If you aren't confident of the search terms, it can take half an hour of dicking around with different terms, clicking through a couple of pages of search results for each set, until you finally figure out the lingo to use. Figuring out what you are really after from that wordy description is the inner magic of LLMs.

Al-Khwarizmi · 5 months ago
> If the result you want is in the first few search hits, it's always better. Your query is shorter so there is less typing, the search engine is always faster, the results are far better because you sidestep the LLM hallucinating as it regurgitates the results it remembers from the page you would have read if you had searched

Most often not true in the kind of searches I do. Say, I search for how to do something in the Linux terminal (not just the command, but the specific options to achieve a certain thing). Google will often take me to pages that do have the answer, but are full of ads and fluff, and I have to browse through several options until I find the ones I want. ChatGPT just gives me the answer.

And with any halfway decent model, hallucination only seems to be a problem in difficult or very specialized questions. Which I agree shouldn't be asked to LLMs (or not without verifying sources, at least). But over 90% of what I search aren't difficult or specialized questions, they're just things I have forgotten, or things that are easy but I don't know just because they're not in my area of expertise. For example as a learner of Chinese, I often ask it to explain sentences to me (translate the sentence, the individual words, and explain what a given word is doing in the sentence) and for that kind of thing it's basically flawless, there's no reason why it would hallucinate as such questions are trivial for a model having tons of Chinese text in its training set.

jfim · 5 months ago
It depends. Sometimes webpages are useful, but at the same time navigating the amount of fluff on webpages nowadays takes longer than asking a LLM.

I asked Claude to give me a recipe that uses mushrooms and freezes well, and it gave me a decent-looking soup recipe. It might not be the best soup ever, but it's soup, kinda hard to mess up. The alternative would be to get a recipe from the web with a couple dozen paragraphs about how this is the bestest soup ever and it comes from their grandma and reminds them of summer and whatnot.

tasuki · 5 months ago
> I'll type some wordy long explanation of what I want into an LLM and hope for the best.

Interesting, I just random words. LLM not care sentence.

deadbabe · 5 months ago
We’re currently in the golden age of LLMs as search engines. Eventually they’ll subtly push products and recommendations in their output to steer you toward specific things.
HellDunkel · 5 months ago
You mean like the golden age of speech recognition a couple of years ago when they claimed 80% of computer interfacing will be voice only?
red-iron-pine · 5 months ago
by "eventually" they mean "at this rate, mid-next year, if not already"
keithnz · 5 months ago
Have you tried ChatGPT search? You can do "DC plane crash" or "Frob" and it will come up with links to the story, and quickly give you a summary with links to its sources. Best thing is you can follow up with questions.
wavemode · 5 months ago
Yes, I have. If I want something to read the page for me, then that's where LLMs come in.

But what I'm talking about is when I want to read the page for myself. Waste of time to have to wait for an LLM to chew on it.

npilk · 5 months ago
Agreed. I think of these as two different types of searches - “page searches” where you know a page exists and want to get to it, and “content searches” where you have a more open-ended question.

Really, for many “page searches”, a good search engine should just be able to take you immediately to the page. When I search “Tom Hanks IMDB”, there’s no need to see a list of links - there’s obviously one specific page I want to visit.

https://notes.npilk.com/custom-search

Y_Y · 5 months ago
> Really, for many “page searches”, a good search engine should just be able to take you immediately to the page.

Are you feeling lucky?

sodality2 · 5 months ago
This is one of my favorite duckduckgo features - adding a single exclamation point after a search (“Tom Hanks IMDB !”) does exactly this
generalizations · 5 months ago
It's been pretty cool to realize that Grok 3 actually prioritizes up-to-date information: I have actually used it for both kinds of your examples, and it worked.
exodust · 5 months ago
Still use Google for quick generic lookups, spell-checks/definitions, shopping stuff, products, or things I know will return a good result.

Grok is great for finding details and background info about recent news, and of course it's great for deep-diving on general knowledge topics.

I also use Grok for quick coding help. I prefer to use AI for help with isolated coding aspects such as functions and methods, as a conversational reference manual. I'm not ready to sit there pretending to be the "pilot" while AI takes over my code!

For the record, I do not like Google's AI generated results it spams at me when I search for things. I want AI when I choose to use AI, not when I don't choose it. Google needs a way to switch that off on the web (without being logged in).

desipenguin · 5 months ago
Agree 100%. I tried Perplexity to "search". My use case was similar to the one described above.

I know what I'm looking for. I just need exact URL.

Perplexity miserably fails at this.

dumbfounder · 5 months ago
Yes they will. Why do you think they won't? They certainly can. You just use RAG to look up the latest news based on the keywords you are using. You can use search on the back end and never surface a list of results unless the LLM decides that is a good idea. It curates the results for you. Or gives you the singular site you need, with context. That is better for most searches.
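
A toy sketch of that flow, with both the search backend and the LLM stubbed out (every name, URL, and article below is invented for illustration; a real system would call a news search API and a chat-completion endpoint):

```python
# Minimal RAG-style pipeline: keyword lookup on the back end, results
# stuffed into the prompt, answer returned without surfacing a list.

def search_backend(keywords: list[str]) -> list[dict]:
    # Stub: a pretend index of recent articles.
    index = [
        {"title": "Plane crash in DC: what we know", "url": "https://example.com/dc-crash"},
        {"title": "Frob language 1.0 released", "url": "https://example.com/frob"},
    ]
    return [doc for doc in index
            if any(k.lower() in doc["title"].lower() for k in keywords)]

def llm(prompt: str) -> str:
    # Stub: a real call would go to a model; here we just echo the context.
    return "Answer based on:\n" + prompt

def rag_answer(query: str) -> str:
    hits = search_backend(query.split())
    context = "\n".join(f"- {d['title']} ({d['url']})" for d in hits)
    return llm(f"Question: {query}\nSources:\n{context}")

print(rag_answer("DC plane crash"))
```

The key design point is in `rag_answer`: the model only ever sees what the retrieval step surfaced, so freshness comes from the search backend, not from retraining the model.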
bmcahren · 5 months ago
I am going to cite you in a decade. Already today ChatGPT is _far_ better than Google. Instead of finding a keyword optimized page for "frob language", I can get the objectively best sources for frob language and even find the best communities related to it. Zero frob ads, zero frob-optimized pages that are designed to trick google, etc.

Traditional search is dead, semantic search through AI is alive and well.

I can't yet count a single time AI has misunderstood the meaning of my search, while Google loves to make assumptions, rewrite my search query, and deliver the results that pay it best and have the best ads (in my opinion, as a lifetime user).

Let's not even mention how they willingly accept misleading ads atop the results which trick the majority of common users into downloading malware and adware on the regular.

crowcroft · 5 months ago
Yea 'needle in a haystack' style search is something that LLM based search is simply not as good at.

The reason Google is still seeing growth (in revenue etc.) is that a lot of 'commercial' search still ends with this kind of action.

Take purchasing a power drill for example, you might use an LLM for some research on what drills are best, but when you're actually looking to purchase you probably just want to find the product on Home Depot/Lowe's etc.

kiney · 5 months ago
LLMs already replaced that news example for me. Especially grok is really good at summarizing the state of reporting for current events like plane crashes

FloorEgg · 5 months ago
Except when search engines bury the thing you're obviously looking for under an entire page of sponsored ads, then that convenience argument starts to not hold up as well...
ptmcc · 5 months ago
If LLMs aren't already doing this, they certainly will soon. And it'll be even more insidious and "invisible" than sponsored search results.
Rayhem · 5 months ago
Except when LLM providers bury the thing you're obviously looking for under an entire page of sponsored ads (buy Diet Coke™!), then that convenience argument starts to not hold up as well...
krferriter · 5 months ago
When I search 'dc plane crash' in Google, Bing, and DuckDuckGo I don't get results buried under ads. When I search 'air conditioner for sale' I do get ads at the top of each of those, but that's more okay because I am looking for things to buy. And it's easy to look past the ads to get to sites like home depot or other traditional retailers that come up not just because they purchased ad space.
piva00 · 5 months ago
Paid ad-free search engines do exist as an alternative. It sucks for those who cannot afford such a luxury, but at some point I noticed that search is quite important in my life, both personally and professionally, so I haven't minded paying for it after the free experience provided by Google, Bing, etc. started worsening.
coryrc · 5 months ago
Do you not use an ad blocker?
shpx · 5 months ago
install uBlock Origin
0x0203 · 5 months ago
And most of the actual results from said search are nothing but LLM-generated slop content that provides zero utility to the user and only exists to capture as many clicks and as much traffic as possible so they can shovel as many ads as possible. The current web is practically unusable.

moralestapia · 5 months ago
>LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.

What? On Planet Earth, this is already a thing.

mr_toad · 5 months ago
> Search is primarily a portal - you know a particular resource exists, you just don't know its exact URL.

Kind of like a manual, with an index.

RTFM people.

coldtea · 5 months ago
>LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.

Sounds trivial to integrate an LLM front end with a search engine backend (probably already done), and be able to type "frob language" and it gives you a curated clickable list of the top resources (language website, official tutorial, reference guide, etc) discarding spam and irrelevant search engine results in the process.

coldpie · 5 months ago
That's called a search engine. We've had them for 30 years. Ahhhhhhhhhhhghh. It's blockchain all over again.

tremarley · 5 months ago
If you wanted to know more about a new programming language named “Frob” or a plane crash that happened today, couldn’t you use an LLM like grok?

Or any other LLM that’s continuously trained on trending news?

coldpie · 5 months ago
How do I know the LLM isn't lying to me? AIs lie all the time, it's impossible for me to trust them. I'd rather just go to the actual source and decide whether to trust it. Odds are pretty good that a programming language's homepage is not lying to me about the language; and I have my trust level for various news sites already calibrated. AIs are garbage-in garbage-out, and a whole boatload of garbage goes into them.
simonw · 5 months ago
None of the LLMs (not even Grok) are "continuously trained" on news. A lot of them can run searches for questions that aren't handled by their training data. Here's Grok's page explaining that: https://help.x.com/en/using-x/about-grok

> In responding to user queries, Grok has a unique feature that allows it to decide whether or not to search X public posts and conduct a real-time web search on the Internet. Grok’s access to real-time public X posts allows Grok to respond to user queries with up-to-date information and insights on a wide range of topics.

lurking_swe · 5 months ago
I can also use my human brain to read a webpage from the source, as the authors intended. Not EVERY question on this planet needs to be answered by a highly resource-intensive LLM. Energy isn't free, you know. :)

Other considerations:

- Visiting the actual website, you’ll see the programming language's logo. That may be a useful memory aid when learning.

- The real website may have diagrams and other things that may not be available in your LLM tool of choice (grok).

- The ACT of browsing to a different web page may help some learners better “compartmentalize” their new knowledge. The human brain works in funny ways.

- I have 0 concerns about hallucination when reading docs directly from the author/source. Unless they also jumped on the LLM bandwagon lol.

Just because you have a hammer in your hand doesn’t mean you should start trying to hammer everything around you friend. Every tool has its place.

eddd-ddde · 5 months ago
It's just a different kind of data. Even without LLMs, sometimes I want a tutorial, sometimes I want the raw API specification.

For some cases I absolutely prefer an LLM, like discoverability of certain language features or toolkits. But for the details, I'll just google the documentation site (for the new terms that the LLM just taught me about) and then read the actual docs.

lcnPylGDnU4H9OF · 5 months ago
Yes, you can use grok but you could also use a search engine. Their point is that grok would be less convenient than a search engine for the use case of finding Frob's website's homepage.
oofbey · 5 months ago
Perplexity solves this problem perfectly for me. It does the web search, reads the pages, and summarizes the content it found related to my question. Or if it didn't find it, it says that.

I recently configured Chrome to only use google if I prefix my search with a "g ".

Okawari · 5 months ago
I still prefer traditional search engines over LLMs, but I admit their results feel worse than they used to.

I don't like LLMs for two reasons:

* I can't really get a feel for the veracity of the information without double checking it. A lot of context I get from just reading results from a traditional search engine is lost when I get an answer from a LLM. I find it somewhat uncomfortable to just accept the answer, and if I have to double check it anyways, the LLM's answer is kind of meaningless and I might as well use a traditional search engine.

* I'm missing out on learning opportunities that I would usually get by reading or skimming through a larger document while trying to find the answer. I appreciate that I skim through a lot of documentation on a regular basis and can recall things that I just happened to read when looking for a solution to another problem. I would hate it if an LLM dropped random tidbits of information when I was looking for concrete answers, but since it's a side effect of my information-gathering process, I like it.

I would, however, like an AI assistant that could help me search and curate the results instead of trying to answer my question directly, hopefully in a sleeker way than Perplexity does with its sources feature.

SoftTalker · 5 months ago
One thing I don't like about LLMs is that they vomit out a page of prose as filler around the key point which could just be a short sentence.

At least that has been my experience. I admit I don't use LLMs very much.

wruza · 5 months ago
It's time to bind "Please be concise in your answer and only mention important details. Use a single paragraph and avoid lists. Keep me in the discussion, I'll ask for details later." to F1.
mips_avatar · 5 months ago
One problem with LLMs is that the amount of "thinking" they do when answering a question depends on how many tokens they use generating the answer. A big part of the power of models like DeepSeek R1 is that they figured out how to get a model to use a lot of tokens in a logical way to work towards solving a problem. The models don't know the answer; they come to it by generating it, and generating more helps them. In the future we'll probably see the trend continue, where the model generates a "thinking" response first and then summarizes the answer concisely.
graemep · 5 months ago
> I can't really get a feel for the veracity of the information without double checking it.

This is my main reason for not using LLMs as a replacement for search. I want an accurate answer. I quite often search for legal or regulatory issues, health, scientific issues, specific facts about lots of things. I want authoritative sources.

Froedlich · 5 months ago
LLMs remind me of the children's game "Telephone."
supportengineer · 5 months ago
Am I the only one who double checks all of the information presented to me, from any source?
da_chicken · 5 months ago
No you don't. If you were doing that you wouldn't have time to eat, let alone sleep.

You check the information you decide should be verified.

ryandrake · 5 months ago
Unless someone's life is on the line, usually eyeballing the source URL is enough for me. If I'm looking for API documentation, there are a few well-known URLs I trust as authoritative. If I'm looking for product information, same thing. If the search engine points me to totallyawesomeproductleadgen19995.biz, I'm probably not getting reliable information.

An LLM response without explicit mention of its provenance... There's no way to even guess whether it is authoritative.

pdabbadabba · 5 months ago
If what you say is literally true: yes, I think you probably are the only one!
theamk · 5 months ago
Wait, so if you go to python.org and the doc page says, "Added in version 3.11", you double-check this?

What do you even use for the double-check? Some random low-quality content farm? A glitchy LLM? A dodgy mirror of the official docs full of ads? Or do you actually dig through the source code for this?

And do you keep double-checking with all other information on the page... "A TOMLDecodeError will be raised on an invalid TOML document." - are you going to start an interactive session and check which error will be raised?

npoc · 5 months ago
How deep do you go? Where do you stop?

Just because you can find multiple independent sources saying the same thing doesn't mean it's correct.

worik · 5 months ago
Are you sure? If you only say it once...

"What I tell you three times is true"

__d · 5 months ago
No.

Part of why I prefer to use a search engine is that I can see who is saying it, and in what context. It might be Wikipedia, but also the CIA World Factbook. Or some blog, but also python.org.

Or (lately) it might be AI SEO slop, reworded across 10 sites but nothing definitive. Which means I need to change my search strategy.

I find it easier (and quicker) to get to a believable result via a search engine than going via ChatGPT and then having to check what it claims.

leptons · 5 months ago
>A lot of context I get from just reading results from a traditional search engine is lost when I get an answer from a LLM. I find it somewhat uncomfortable to just accept the answer, and if I have to double check it anyways, the LLM's answer is kind of meaningless and I might as well use a traditional search engine.

And this is how LLMs perform when LLM-rot hasn't even become widely pervasive yet. As time goes on and LLMs regurgitate into themselves, they will become even less trustworthy. I really can't trust what an LLM says, especially when it matters, and the more it lies, the more I can't trust them.

bluGill · 5 months ago
I find LLMs useful for the case where I'm not sure what the right terms are. I can describe something and the LLM gives me a term which I then type into a search engine to get more information. I'm only starting to use LLMs though, so maybe I'll use them more in the future? - only time will tell.
miloignis · 5 months ago
Yes, I use search engine(s) constantly - namely Kagi, which really does feel like Google used to. I tried using LLMs for a recent project of mine when I was trying to figure out if something was possible, and they were actively misleading, every time. My issue for this project was that what I was asking for did end up not being currently possible, but LLMs wouldn't tell me that and would make up incorrect ways to solve my problem, since they didn't want to tell me it couldn't be done.

Really, these days, either I know some resource exists and I want to find it, in which case a search engine makes much more sense than an LLM which might hallucinate, or I want to know if something is possible / how to do it, and the LLM will again hallucinate an incorrect way to do it.

I've only found LLMs useful for translation, transcription, natural language interface, etc.

NelsonMinar · 5 months ago
My experience too. The problem isn't search, it's Google. Kagi really is very useful. I use LLMs for some things but still lots of Kagi search.
marvinblum · 5 months ago
It's the same for me. I switched to DuckDuckGo 2 or 3 years ago and it feels like Google used to. I'm always shocked to see how bad the results are and how cluttered the top section is on Google if I happen to search there on someone else's computer.

LLMs have mostly been useful for three things: single line code completion (in GoLand), quickly translating JSON, and generating/optimizing marketing texts.

averageRoyalty · 5 months ago
Agreed, for resource location, Kagi feels like Google did 20 years ago.

I use LLMs as a sounding board. Often if I'm trying to tease out the shape of a concept in my head, it's best to write it out. I now do this in the form of a question or request for information and dump it into the LLM.

jlbang · 4 months ago
Fully agreed. I pay for Kagi monthly now, and it's totally worth it. I really hope they grow and become more well known, because they're doing what one of the biggest companies in the world does, and doing it better, and it seems so few people even know about them.
star-glider · 5 months ago
This is my favorite thing about Kagi; you can do both. If you just append a question mark, it'll run the search through a simple LLM and give you those results (with citations) right before standard search. From there, you can proceed into a more sophisticated frontier model if that's more effective.

"Search" can mean a lot of things. Sometimes I just want a website but can't remember the URL (traditional); other times I want an answer (LLMs); and other times, I want a bunch of resources to learn more (search+LLMs).

sshine · 5 months ago
And sometimes you have all the data, but it’s too much, so you ask for a summary and ask elaborating questions.
dharmab · 5 months ago
I've found this ineffective for anything where I need a factual answer, and brilliant where I need the vibe of subjective or fictional things.

Bad: summarizing scientific research or technical data

Great: finding travel ideas or clarifying aspects of a franchise's fictional universe.

bayindirh · 5 months ago
I use Kagi exclusively and refuse to offload my brain to a thing that has no accuracy guarantee whatsoever. The answers it emits can be completely bogus, and the developers of these things low-key expect me to believe what their black box says? Nah, never.

Instead I use a search engine and do my own reading and filtering. This way I learn what I'm researching, too, so I don't fall into the vicious cycle of drug abu ^H^H^H^H^H laziness. Otherwise I'll inevitably rely more and more on that thing, and be a prisoner of my own doing by increasingly offloading my tasks to a black box and becoming dependent on it.

drpixie · 5 months ago
100% agree.

Google recently (unrequested) provided me with very detailed AI generated instructions for server config - instructions that would have completely blown away the server. There will be someone out there who just follows the bouncing ball, I hope they've got good friends, understanding colleagues, and good backups!

tasuki · 5 months ago
> I use Kagi exclusively and refuse to offload my brain to a thing which has no accuracy guarantee ever.

What a weird sentence. What accuracy guarantees does Kagi have? Or, if you're not "offloading your brain to it", can't you do the same with an LLM?

bayindirh · 5 months ago
Kagi is a search engine that is developing its own index, and it uses other indexes to give a more comprehensive result if its own index doesn't fulfill the query you made.

Moreover, Kagi is a paid service. It has no ads, no hidden ranking, nothing to earn money by manipulating you. On the contrary you, the user, can add filters and ranking modifiers to promote the sites you find to be useful/truthful and demote others which push slop and SEO optimized content to your eyeballs. This is per user, and is not meddled with.

This makes Kagi very deterministic (unlike LLMs), very controllable (unlike LLMs), and very personalized (unlike LLMs). Moreover, Kagi gives you ~20 results or so per search, and no fillers (again, unlike LLMs).

I don't use Kagi's AI assistance features, and I don't pay for the "assistant" part of it, either.

I don't offload my brain to Kagi, because I don't prompt it until it gives me something I like. Instead, I get the results, read them, learn what I'm looking for, and possibly document what I got out of that research. This usage pattern is, again, very different from prompting an LLM until it gives you something that somewhat works or sounds plausible.

I do the hard work of synthesizing and understanding the answer. Not reading some slop and accepting it at face value.

EliasWatson · 5 months ago
Google results have gotten so terrible over the years. I switched to Kagi long ago and haven't looked back. Whenever I use Google on another computer, I'm shocked by how awful the results are compared to Kagi.

As for AI search, I do find it extremely useful when I don't know the right words to search for. The LLM will instantly figure out what I'm trying to say.

sshine · 5 months ago
Probably 70% of my searches are FastGPT searches, meaning I end my search query with a ‘?’ and Kagi summarises the results, so I don’t need to click.

And the ratio between using search engine and Kagi’s LLM agent with search is still 70% search. Sometimes, searching is faster, sometimes asking AI is faster.

jacobmarble · 5 months ago
Same. I switched to Kagi over a year ago, and now every other search engine looks like a steaming pile of ads and slop.
tiborsaas · 5 months ago
I'm the inverse: 90% of the time I still use search engines, mostly Google. LLMs can't help me with researching Hungarian companies offering the screws, furniture, TVs, etc. I need for my home renovation. They can't find me the best route to a cafe, look up users, or find information on famous people. Google is also faster than me typing a good prompt.

I use LLMs for what they are good at: generative stuff. I know some tasks take me a long time, and I can shortcut them with LLMs easily.

So here's a ChatGPT example query* which is completely off:

https://chatgpt.com/share/67f5a071-53bc-8013-9c32-25cc2857e5...

* It's intentionally bad, to be able to compare with Google.

And here's the web result, which is spot on:

https://imgur.com/a/6ELOeS1

zer00eyz · 5 months ago
My take is close to yours...

LLMs are great when you want AN answer and don't want to get sidetracked.

Search is great when you want to know what answers are out there. The best example is recipes: from what spices go into chai to the spice mix in any given version of chili (let's not start on beans).

The former is filling in missing knowledge; the latter is learning.

keithnz · 5 months ago
this is what I put in chat gpt..... ie, exactly what you put in google.

https://imgur.com/a/boNS2YZ

https://chatgpt.com/share/67f5a9f9-f0a8-800d-9101-aafb88e455...

which I think is way better than google.

tiborsaas · 5 months ago
It does a web search, so you got luckier. I searched for it because I found it on the manufacturer's product page and was interested in finding a place to buy it.

Google offered me a few hits with existing businesses; with ChatGPT I need to do another query.

Out of curiosity I tried it, and it did take me to a wholesale company (a single result), but the Google results are better, with cheaper options (multiple good results), and I can also parse the list faster with my eyes.

Sure, I can just write a better prompt:

https://chatgpt.com/share/67f5b09b-c154-8013-840f-934af8302f...

This is my third attempt to get it right, but it found me one I hadn't seen before. However, I would still do a Google search to be thorough and get the best deal.

yellowapple · 5 months ago
LLMs are still notorious for hallucination; last I checked ChatGPT in particular still hallucinates about 1/3rd of the time.

So yeah, I do still use search engines, specifically Kagi and (as a fallback) DuckDuckGo. From either of them I might tack on a !g if I'm dissatisfied with the results, but it's pretty rare for Google's results to be any better.

When I do use an LLM, it's specifically for churning through some unstructured text for specific answers about it, with the understanding that I'll want to verify those answers myself. An LLM's great for taking queries like "What parts of this document talk about $FOO?" and spitting out a list of excerpts that discuss $FOO that I can then go back and spot-check myself for accuracy.