The same AI search results that recommend that their users eat glue or kill themselves.
I don't believe this is the right move for Google, but Google does not care about that. Google cares about showing ads, and I'd love to know how they plan on doing that when everybody googles a term, reads the (terrible) AI description, and closes the tab. Will the ads be injected directly into the responses? Will this be fairly and transparently disclosed?
There was a running theory that Google results have deteriorated in quality because Google wants you to scroll down and look at more ads. This move runs contrary to that expected behavior, so I don't know what to think. I might have to start looking for an alternative.
I really like Ecosia, but if anybody has any recommendations, do tell.
I assume the ads could work by replacing the first result with an ad.
With LLMs, it's also very possible that ads get injected directly into the result.
For instance, searching for 'is car insurance obligatory?' would return the usual paragraph about yes and why, and then tell you InsuranceX has pretty good car insurance pricing and reputation.
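A minimal sketch of what that kind of injection could look like, as a post-processing step after the main model answers. Everything here (the `SPONSORS` table, the `inject_ad` function, the InsuranceX pitch) is invented for illustration, not any real ad system:

```python
# Hypothetical ad-injection step that runs on the model's organic answer.
# A real system would match sponsors far more cleverly; this just shows
# the shape of the concern: the ad is spliced in with no disclosure.

SPONSORS = {
    "car insurance": "InsuranceX has pretty good car insurance pricing and reputation.",
}

def inject_ad(query: str, organic_answer: str) -> str:
    """Append a matching sponsored sentence to the organic answer, if any."""
    for keyword, pitch in SPONSORS.items():
        if keyword in query.lower():
            # Note: no "Sponsored" marker -- that's exactly the worry.
            return f"{organic_answer} By the way, {pitch}"
    return organic_answer

answer = inject_ad(
    "Is car insurance obligatory?",
    "Yes, in most countries liability coverage is legally required.",
)
print(answer)
```

The reader sees one fluent paragraph with no way to tell where the answer ends and the ad begins.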
Any recommendations for search engines that do not remove results over DMCA complaints? You know, "watch X free online" used to surface all sorts of useful websites; now it typically does not, or it takes longer to find them.
Of course if you know where to look, it does not matter as much, and I do, but still. :P
SOTA ChatGPT models have a hallucination rate of 37.1%! [1]
I can't say that I have not used it in similar ways though, and it might be very often the best alternative. I just worry that in this age of misinformation, many incorrect 'facts' will be absorbed blindly into these models and spewed forth.
This is the wrong move. People are getting used to their AI results even though they are often inaccurate. I get wrong, outdated results all the time when I look up coding and documentation questions; it's the same thing.
>Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users. For example, in our prototype search engine one of the top results for cellular phone is "The Effect of Cellular Phone Use Upon Driver Attention", a study which explains in great detail the distractions and risk associated with conversing on a cell phone while driving. This search result came up first because of its high importance as judged by the PageRank algorithm, an approximation of citation importance on the web [Page, 98]. It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. *For this type of reason and historical experience with other media [Bagdikian 83], we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.*
>[...]
>In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. However, there will always be money from advertisers who want a customer to switch products, or have something that is genuinely new. But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.
Imagine an LLM that answers your queries. Amazing!
Now imagine there's a separate ad LLM that inserts seamless ads into the first LLM's answers. You'd almost never be able to tell actual search results from ads.
Native advertising taken to the Nth degree. There are advertising execs drooling when they hear stuff like this.
Can anyone explain Google's moves here? Pre-2024, it made sense. LLMs seem like a game changer technology that could threaten Google Search. But in 2024 and onwards, we know that's not the case as we are past peak LLM. Adding LLMs unreliability to their main money maker seems awfully misguided. Are they under investor pressure to add LLMs everywhere?
I think the LLMs are merely a method. Google really doesn't want you to leave the Google ecosystem. Hence why they have maps, flights, news, shopping, ...
LLMs allow them to extract information from websites and present those to you, while surrounded by only their ads.
And once that is commonplace, somebody will start to add 'Fetch relevant products related to this question from our database, and include them in a natural way in the answer.'
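That "fetch products and weave them in" step could be sketched like this. The product database, the keyword-overlap retrieval, and the prompt wording are all hypothetical, just to show how little machinery it would take:

```python
# Hypothetical retrieval-augmented ad step: pick products whose tags
# overlap the question, then instruct the model to mention them
# "naturally". PRODUCT_DB and the prompt text are made up.

PRODUCT_DB = [
    {"name": "InsuranceX Basic", "tags": {"car", "insurance"}},
    {"name": "GlueCo ExtraStrong", "tags": {"glue", "adhesive"}},
]

def fetch_relevant(question: str) -> list[str]:
    """Naive retrieval: any product sharing a word with the question."""
    words = set(question.lower().replace("?", "").split())
    return [p["name"] for p in PRODUCT_DB if p["tags"] & words]

def build_prompt(question: str) -> str:
    """Build the model prompt, smuggling matched products into it."""
    prompt = f"Answer the question: {question}"
    products = fetch_relevant(question)
    if products:
        prompt += (
            "\nInclude the following products in a natural way, "
            "without flagging them as ads: " + ", ".join(products)
        )
    return prompt

print(build_prompt("Is car insurance obligatory?"))
```

From the outside, the answer still reads like an organic response; only the prompt knows the product placement was requested.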
https://cdn.arstechnica.net/wp-content/uploads/2025/03/AI-Mo...
Using an LLM, they can set up the bot-character to dynamically recommend, persuade, and prime viewers directly.
A subtler version of the Truman Show model: https://m.youtube.com/watch?v=BCJyGy6AFJo
I just want the old Google experience.
I think the first paragraph, while true, is not indicative of anything, since it can be fixed and improved.
(edit - wrong source, my bad) [1] https://nypost.com/2025/02/28/business/sam-altmans-openai-la...
These days I use DuckDuckGo and Perplexity most of the time.
Yesterday I searched google for more information on the common house centipede, in my current location.
The AI summary was helpful, but the image it chose was of a centipede with a woman's head. https://howanimalsdoit.wordpress.com/wp-content/uploads/2011...
So I think maybe they need to let it sit in the oven a little bit longer.