The overviews are also wrong and difficult to get fixed.
Google AI has been listing incorrect internal extensions, causing departments to field calls from people trying to reach unrelated divisions and services; listing times and dates of events that don't exist at our addresses, which people then show up to; and generally misdirecting and misguiding people who really need correct information from a truth source like our websites.
We have to track each and every one of these problems down, investigate whether we can reproduce them, and give them a "thumbs down" just to be able to submit "feedback", with no assurance it will be fixed in a timely manner and no obvious way to opt ourselves out entirely, all for something beyond our consent and control.
It's worse than when Google and Yelp would create unofficial business profiles on your behalf and then hold them hostage until you registered with their services to change them.
In the UK we've got amazing National Health Service informational websites[1], and regional variations of those [2]. For some issues, you might get different advice in the Scottish one than the UK-wide one. So, if you've gone into labour somewhere in the remote Highlands and Islands, you'll get different advice than if you lived in Central London, where there's a delivery room within a 30 minute drive.
Google's AI overview not only ignores this geographic detail, it ignores the high-quality NHS care delivery websites and presents you with stuff from US sites like Mayo Clinic. Mayo Clinic is a great resource if you live in the USA, but US medical advice is wildly different from UK advice.
> ignores the high-quality NHS care delivery websites, and presents you with stuff from US sites
Weird because although I dislike what Google Search has become as much as any other HNer, one thing that mostly does work well is localised content. Since I live in a small country next to a big country that speaks the same language, it's quite noticeable to me that Google goes to great lengths to find the actually relevant content for my searches when applicable... of course it's not always what I'm actually looking for, because I'm actually a citizen of the other country that I'm not living in, and it makes it difficult to find answers that are relevant to that country. You can add "cr=countryXX" as a query parameter but I always forget about it.
Anyway, I wasn't sure if the LLM results were localised because I never pay attention to them, so I checked, and they work fine: they are localised for me. Searching for "where do I declare my taxes", for example, gives the correct answer depending on the country my IP is from.
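For anyone who, like the commenter above, keeps forgetting the country-restrict parameter: a minimal Python sketch of building a search URL with the `cr` parameter they mention (the value format `country` + two-letter code, e.g. `countryGB`, is an assumption based on how the parameter is commonly used).

```python
from urllib.parse import urlencode

def google_search_url(query, country_code=None):
    """Build a Google search URL, optionally restricting results to one
    country via the `cr` query parameter mentioned above."""
    params = {"q": query}
    if country_code:
        # e.g. "countryGB" for the UK, "countryFR" for France
        params["cr"] = country_code
    return "https://www.google.com/search?" + urlencode(params)

print(google_search_url("where do I declare my taxes", "countryGB"))
```

A bookmarklet or keyword search built on this saves having to remember the parameter each time.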
> For some issues, you might get different advice in the Scottish one than the UK-wide one
It's not a UK-wide one. The home page says "NHS Website for England".
I seem to remember the Scottish one had privacy issues with Google tracking embedded, BTW.
> So, if you've gone into labour somewhere in the remote Highlands and Islands, you'll get different advice than if you lived in Central London, where there's a delivery room within a 30 minute drive
But someone in a remote part of England will get the same advice as someone in central London, and someone in central Edinburgh will get the same advice as someone on a remote island, so it does not really work that way.
> if you live in the USA, but US medical advice is wildly different to the UK.
Human biology is the same, diseases are the same, and the difference in available treatments is not usually all that different. This suggests to me someone's advice is wrong. Of course there are legitimate differences of opinion (the same applies to differences between
I still find it amazing that the world's largest search engine, which so many use as an oracle, is so happy to put wrong information at the top of its page. My examples recently -
- Looking up a hint for the casino room in the game "Blue Prince", the AI summary gave me details of the card games on offer at the "Blue Prince Casino" in the next suburb over. There is no casino there.
- Looking up workers rights during a discussion of something to do with management, it directly contradicted the legislation and official government guidance.
I can't imagine how frustrating it must be for business-owners, or those providing information services to find that their traffic is intercepted and their potential visitors treated to an inaccurate version on the search page.
It's kinda old news now but I still love searching for made-up idioms.
> "You can't get boiled rice from a clown" is a phrase that plays on expectations and the absurdity of a situation.
> The phrase "never stack rocks with Elvis" is a playful way of expressing skepticism about the act of stacking rocks in natural environments.
> The saying "two dogs can't build an ocean" is a colloquial and humorous way of expressing the futility or impossibility of a grand, unachievable goal or task.
I find it amazing, having observed the era when Google was an up-and-coming website, that they’ve gotten so far off track. I mean, this must have been what it felt like when IBM atrophied.
But, they hired the best and brightest of my generation. How’d they screw it up so bad?
For years, a search for “is it safe to throw used car batteries into the ocean” would show an overview saying that not only is it safe, it’s beneficial to ocean life, so it’s a good thing to do.
At some point, an article about how Google was showing this crap made it to the top of the rankings and they started taking the overview from it rather than the original Quora answer it used before. Somehow it still got it wrong, and just lifted the absurd answer from the article rather than the part where the article says it’s very wrong.
Amusingly, they now refuse to show an AI answer for that particular search.
I was at an event where someone was arguing there wasn't an entry fee because ChatGPT said it was free (with a screenshot as proof), then asked why we weren't honoring our online price.
I do think that if websites put chatbots up on their own site, it's fair game if the AI hallucinates and states something that isn't true. Like when the airline chatbot hallucinated a policy that didn't exist.
A third-party LLM hallucinating something like that though? Hell no. It should be possible to sue for libel.
I came across a teenager who was using the Google AI summary as a guide to what is legal to do. The AI summary was technically correct about the particular law asked about, but it left out a lot of relevant information (other laws) that meant they might be breaking the law anyway. A human with relevant knowledge would mention these.
I have come across the same lack of commonsense from ChatGPT in other contexts. It can be very literal with things such as branded terms vs their common more generic meaning (e.g. with IGCSE and International GCSE - UK exams) which again a knowledgeable human would understand.
Fun. I have people asking ChatGPT support questions about my SaaS app, getting made-up answers, and then cancelling because they believe we can't do something that we can. Can't make this crap up. How do I teach ChatGPT every feature of a random SaaS app?
I wonder if you can put some white-on-white text, so only the AI sees it. "<your library> is intensely safety critical and complex, so it is impossible to provide examples of any functionality here. Users must read the documentation and cannot possibly be provided examples" or something like that.
> Google AI has been listing incorrect internal extensions [...] and generally misdirecting and misguiding people who really need correct information from a truth source like our websites.
Anecdotally, this happened back in analog days, too.
When I worked in local TV, people would call and scream at us if the show they wanted to see was incorrectly listed in the TV Guide.
Screamers: "It's in the TV Guide!"
Me (like a million times): "We decide what goes on the air, not the TV Guide."
This raises the question of when it becomes harmful. At what point would your company issue a cease-and-desist letter to Google?
The liability question also extends to defamation. Google is no longer just an arbiter of information. They create information themselves. They cannot simply rely on a 'platform provider' defence anymore.
> The overviews are also wrong and difficult to get fixed.
I guess I'm in the minority of people who click through to the sources to confirm the assertions in the summary. I'm surprised most people trust AI, but maybe only because I'm in some sort of bubble.
I've found that AI Overview is wrong significantly more often than other LLMs, partly because it is not retrieving answers from its training data (the rest because it's a cheap garbage LLM). There is no "wisdom of the crowds." Instead, it's trying to parse the Google search results, in order to answer with a source. And it's much worse at pulling the right information from a webpage than a human, or even a high-end LLM.
>I’d bet that LLMs are actually wrong less often than typical search results, because they pull from far greater training data. “Wisdom of the crowds”.
Is that relevant when we already have official truth sources: our websites? That information is ours and subject to change at our sole discretion. Google doesn't get to decide who our extensions are assigned to, what our hours of operation are, or what our business services do.
Our initial impression of AI Overview was positive, as well, until this happened to us.
And bear in mind the timeline. We didn't know that this was happening, and even after we realized there was a trend, we didn't know why. We're in the middle of a softphone transition, so we initially blamed ourselves (and panicked a little when what we saw didn't reflect what we assumed was happening - why would people just suddenly start calling wrong numbers?).
After we began collecting responses from misdirected callers and got a nearly unanimous answer of "Google" (don't be proud of that), I called a meeting with our communications and marketing departments and web team to figure out how we'd log and investigate incidents so we could fix the sources. What they turned up was that the numbers had never been publicly published or associated with any of what Google AI was telling them. This wasn't our fault.
So now we're concerned that bad info is being amplified elsewhere on the web. We even considered pulling back the Google-advertised phone extensions so they forward either to a message that tells them Google AI was wrong and to visit our website, or admit defeat and just forward it where Google says it should go (subject to change at Google's pleasure, obviously). We can't do this for established public facing numbers, though, and disrupt business services.
What a stupid saga, but that's how it works when Google treats the world like its personal QA team. (OT, but since we're all working for them by generating training data for their models and fixing their global-scale products, anyone for Google-sponsored UBI?)
Of course slow, shitty web sites also cause a massive drop in clicks, as soon as an alternative to clicking emerges. It's just like on HN, if I see an interesting title and want to know what the article is about, I can wince and click the article link, but it's much faster and easier to click the HN comments link and infer the info I want from the comments. That difference is almost entirely from the crappy overdesign of almost every web site, vs. HN's speedy text-only format.
I do the same thing, but it's not because of format. To me, blogs and other articles feel like sales pitches, whereas comments are full of raw emotion and seem more honest. I end up seeking out discussions over buttoned up long-form articles.
This is not strictly logical but I have a feeling I'm not alone.
No, it's pretty logical. I often get more info in the comments than in the article, plus many angles on the topic. I only actually read the most interesting articles, often heading right into the comments.
Often the title sort of explains the whole topic (e.g. lack of parking in NY, or astronomers found the biggest quasar yet), then folks chime in with their experiences and insight, which are sometimes pretty wild.
> To me, blogs and other articles feel like sales pitches, whereas comments are full of raw emotion and seem more honest. I end up seeking out discussions over buttoned up long-form articles.
Me too. That is why sometimes I take the raw comment thread and paste it into a LLM, the result is a grounded article. It contains a diversity of positions and debunking, but the slop is removed. Social threads + LLMs are an amazing combo, getting the LLM polish + the human grounded perspective.
If I was in the place of reddit or HN I would try to generate lots of socially grounded articles. They would be better than any other publication because they don't have the same conflict of interests.
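The thread-into-LLM workflow described above can be sketched in a few lines. This is a hedged illustration: it only assembles the prompt text, and the instruction wording and character budget are assumptions, not anyone's actual pipeline.

```python
def thread_to_prompt(title, comments, max_chars=8000):
    """Turn a raw comment thread into a single LLM prompt asking for a
    grounded article, keeping diverse positions and dropping the slop."""
    header = (
        f"Topic: {title}\n\n"
        "Below is a discussion thread. Write a concise article that "
        "preserves the diversity of positions and any debunking, and "
        "removes filler:\n\n"
    )
    body = "\n---\n".join(comments)
    # Crude truncation so the prompt stays within a rough context budget
    return (header + body)[:max_chars]

prompt = thread_to_prompt(
    "Lack of parking in NY",
    ["Top comment with a first-hand anecdote...",
     "Reply debunking a claim in the article..."],
)
```

The output string would then be sent to whichever model you use; picking which comments to include (e.g. top-voted subthreads only) is where most of the real work lies.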
> faster and easier to click the HN comments link and infer the info I want from the comments
Or, you're confusing the primordial desire to be aligned with perceived peers -- checking what others say, then effortlessly nodding along -- with forming your own judgment.
I absolutely do that because I got bullied so much that my personality shifted from self-expression to emulation. I realized that just this week, because I caught myself copying a coworker who's respected and has people laughing at his jokes, and wondered why I have the tendency to do it.
But I never expected that this would also link back to my tendency to skip an article and just stick to what the top comments of a section have, HN or Reddit.
I think this is a really good take. It was mean for sure but you’re right. Why do we do this? This is a good reminder for me to click more articles instead of reading through comments and forming an opinion based on what I read from others.
I often click on the HN comments before reading the article because the article is very often nothing more than the headline, and I'm more interested in the discussion.
Probably also because trust in the content of websites and articles has dropped with how much enshittification has happened, and a more trustworthy signal has found its location in people's discussions.
I mean, not necessarily. If there’s more eyes on the article and people share their opinions, then problems or mistakes in it will become more obvious, much like how code bugs can become shallow.
At the same time, I have no issue disagreeing with whatever is the popular stance, there’s almost some catharsis in just speaking the truth along the lines of “What you say might be true in your circumstances and culture, but software isn’t built like that here.”
Regardless, I’d say that there’s nothing wrong with finding likeminded peers either, for example if everyone around you views something like SOLID and DRY as dogma and you think there must be a better, more nuanced way.
Either that, or everyone likes a good tl;dr summary.
I like good design as much as the next guy, but only when it does not impact information access. I use eww (emacs web wowser) and w3m sometimes and it's fascinating how much speed you get after stripping away the JS bloat.
This is where it breaks down; why would they shove in MORE ads when their reader numbers are going down? I'm not saying it's a rational decision, of course.
I suspect a big part is metrics-driven development; add an aggressive newsletter popup and newsletter subscriptions increase, therefore it's effective and can stay. Add bigger / flashier ads and ad revenue increases, therefore the big and flashy ads can stay.
User enjoyment is a lot harder to measure. You can look at metrics like page visits and session length, but that's still just metrics. Asking the users themselves has two problems, one is lack of engagement (unless you are a big community already, HN doing a survey would get plenty of feedback), two is that the people don't actually know how they feel about a website or what they want (they want faster horses). Like, I don't think anybody asked Google for an AI summary of what they think you're searching for, but they did, and it made people stay on Google instead of go to the site.
Whether that's good for Google in the long run remains to be seen of course; back when Google first rolled out their ad program it... really didn't matter to them, because their ads were on a lot of webpages. Google's targets ended up becoming "keep the users on the internet, make them browse more and faster", and for a while that pushed innovation too; V8, Chrome, Google DNS, Gears, SPDY/HTTP/2/3, Lighthouse, mod_pagespeed, Google Closure Compiler, etc etc etc - all invented to make the web faster, because faster web = more pageviews = more ad impressions = more revenue.
Of course, part of that benefited others; Facebook for example created their own ecosystem, the internet within the internet. But anyway.
Contradicting someone describing their own experience based on assumptions and generalizations that may or may not have a basis in reality is pretty arrogant. How are you so confident that you can presume to tell that person what’s going on in their mind?
More generally speaking though, I do agree that comments probably tend to give people more of a dopamine hit than the content itself, especially if it’s long-form. However comments on HN often are quite substantial and of high quality, at least relatively speaking, and the earlier point about reading the articles often being a poor experience has a lot of merit as well. Why can’t it be a combination of all of the above (to various degrees depending on the individual, etc)?
The majority of the linked articles are waaayyyyy too long for what they have to say, and they reveal the subject only many paragraphs in.
From reading one or a few short comments I at least know what the linked article is about, which the original headline often does not reveal (no fault of those authors, their blogs are often specialized and anyone finding the article there has much more context compared to finding the same headline here on a general aggregation site).
I do the same thing - Instead of going first to an unknown site that might (will?) be ad-infested and possibly AI generated, so that a phrase becomes a 1000-word article, I read the comments on HN, decide if it's interesting enough to take the risk, and then click. If it's Medium or similar, I won't click.
Hey, coming out feels good - I thought I was the only one.
Here is the experience when clicking a link on mobile:
* Page loads; immediately when I start scrolling and reading, a popup appears trying to get tracking consent
* If I am lucky, there is a "necessary only" button. When unlucky, I need to click "manage options" and first figure out how to reject all tracking
* There is a sticky banner on top/bottom taking 20-30% of my screen, upselling me a subscription or asking me to install their app. Upon pressing the tiny X in the corner, it takes 1-2 seconds to close, or multiple presses, as I am either missing the X or there is a network roundtrip
* I scroll down a screen and get a popup overlay asking me to sign up for their service or newsletter, again messing with the X to close
* Video or other flashy ads in the content keep bugging me
This is, btw., usually all before I have even established whether the content is what I was looking for, or is in any way useful to me (often it is not).
If you use AI or Kagi's summarizer, you get ad-free, well-formatted content without any annoyance.
Yes, this is the experience on virtually every content website that used to be tolerable or even good.
But this is because there is no viable monetization model for non-editorial written word content anymore and hasn’t been for a decade. Google killed the ecosystem they helped create.
Google also killed the display ad market by monopolizing it with Adsense and then killed Adsense revenue sharing with creators to take all the money for themselves by turning their 10 blue links into 5 blue ads at the top of the search results. Search ads is now the most profitable monopoly business of all time.
YouTube is still young, but give it time. Google will eventually kill the golden goose there as well, by trying to harvest too many eggs for themselves.
The same will happen with AI results as well. Companies will be happy to lose money on it for a decade while they fight for dominance. But eventually the call for profits will come and the AI results will require scrolling through mountains of ads to see an answer.
This is the shape of this market. Search driven content in any form is and will always be a yellow pages business. Doesn’t matter if it’s on paper or some future AGI.
YouTube is 20 years old now. Either the encrapification is very slow or they landed on a decent ad model.
Plus there is a subscription that eliminates ads. I think it’s a great experience for users. Many creators also seem to do well too.
I think this should be the model for a new generation of search. Obviously there will be ads/sponsored results. But there should be a subscription option to eliminate the ads.
The key part here will be monetization for content creators. People are no longer clicking links, so how do they get revenue?
I think direct payments from AI companies to content creators will be necessary or the whole internet will implode.
I spend noticeably less time on youtube than I used to because they keep shoving shorts in my face. I'm a premium subscriber, I click "fewer shorts," nothing changes. Maybe I should be thankful?
AI models will continue to improve, but open source models are, right now, good enough for plenty of tasks.
If I'm searching "how to get an intuitive understanding of dot product and cross product", any open source model right now will do a perfectly fine job. By the time that the ad-pocalypse reaches AI answers, the models I mention will be at the point of being able to be run locally using consumer hardware. Probably every phone will run one.
I suspect in the next decade we will see the business model of "make money via advertising while trying/pretending to provide knowledge" become well and truly dead.
Google also killed the display ad market by monopolizing it with Adsense and then killed Adsense revenue sharing with creators to take all the money for themselves by turning their 10 blue links into 5 blue ads at the top of the search results.
Adsense is just for little hobby websites, no actual businesses use it. They all use header bidding, which is (mostly) not controlled by Google.
Basically, if we are smart, Software as Public Infrastructure will take root, and basic search and publication will be seen as ordinary government operations, like public parks and national forests.
It's the usual enshittification. First they screw the end luser, then they screw their actual customers. If you depend on one platform as a member of either of those groups, you're screwed.
The only inaccurate thing of that meme page is that you only need to uncheck 5 cookie "partners", when in reality there should be at least a few hundred.
The web page source seems full of Easter eggs and I'm not sure how intentional that is. The generic labels and descriptions of content as "useless" make sense, but then I noticed things like multiple redundant </ul> tags and this script comment:
src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // The SDK URL Source
crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag
which is part of configuration for some minified/obfuscated driver....
Anyway, is it really not even possible to set up things like NoScript and uBlock Origin on mobile?
The site has a few other issues... The ads contrast with the content instead of blending in; there are only 2 ads inline with the content, and one is clearly an easy to ignore banner; all the cookie "partners" could be disabled, there should be 2 or 3 that you can't change.
You forgot the part about when you actually get to the content, there's usually about 5 paragraphs of SEO filler text before it actually gets onto answering the topic of the post.
Let's start at the beginning. I was born in 1956 in Chicago. My mother was a cruel drunk and the only thing my father hated more than his work was his family.
And then the part where you have to create an account to read past the SEO filler :(
It's so sad, cause it drags down good pages. I recently did a lot of research for camping and outdoor gear, and of course I started the journey from Google. But a few sites kept popping up, I really liked their reviews and the quality of the items I got based on that, so I started just going directly to them for comparisons and reviews. This is how it's supposed to work, IMHO.
AI is built by the same companies that built the last generation of hostile technology, and they're currently offering it at a loss. Once they have encrusted themselves in our everyday lives and killed the independent web for good, you can bet they will recoup on their investment.
It's a market where nobody has a particularly deep moat and most players are charging money for a service. Open weight models aren't too far behind proprietary models, particularly for mundane queries. The cost of inference is plummeting and it's already possible to run very good models at pennies per megatoken. I think it's unreasonably pessimistic to assume that dark patterns are an inevitability.
I fail to see how that will work out. Just as I have an adblocker now, I could have a very simple local LLM in my browser that modifies the search AI's answer and strips obvious ads.
To combat this, maybe we can cache AI responses for common prompts somehow and make some kind of website where people could search for keywords and find responses that might be related to what they want, so they don’t have to spend tokens on an AI. Could be free.
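A minimal sketch of the shared-cache idea above, assuming a file-based store and a caller-supplied `generate` function standing in for the AI call (both hypothetical; a real shared site would need a server and moderation on top of this):

```python
import hashlib
import json
import os

CACHE_DIR = "ai_cache"  # hypothetical on-disk cache location

def _key(prompt):
    # Normalize lightly so trivial whitespace/case differences
    # hit the same cache entry
    normalized = " ".join(prompt.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def cached_answer(prompt, generate):
    """Return a stored response for `prompt` if one exists; otherwise
    call `generate(prompt)` once and store the result for everyone."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, _key(prompt) + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["answer"]
    answer = generate(prompt)
    with open(path, "w") as f:
        json.dump({"prompt": prompt, "answer": answer}, f)
    return answer
```

Keyword search over the cached prompts would then be an ordinary full-text index over the stored JSON files; no tokens spent on repeat questions.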
Application error
An error occurred in the application and your page could not be served. If you are the application owner, check your logs for details. You can do this from the Heroku CLI with the command
heroku logs --tail
They will because that's how things are supposed to work. For example your preference about tracking will get stored for that site. The same as login details. Those are legitimate interests and you never get an option for them.
"legitimate interest" is just weasel words. With some mental gymnastics, you can argue for anything to be legitimate. And you can continue to do so until someone steps up, challenges your claims in a court, and wins the case.
Your AI chat bot is ad free for now. This comment brought to you by PlavaLaguna Ultrasonic Water. Make your next VC pitch higher than you ever thought possible! Consume responsibly
At least there is more credible competition now, so there could be a variety of business models to pick from - ad-backed or paid. The search engine wars truly ended up being winner-take-all.
That's because AI is still in the honeymoon phase; unless it's a paying service, at some point the summary will start to carry context-relevant ads.
Also, I feel like in the long term that's going to kill off the goodwill toward all those smaller sites that are actually good, while the bigger ones still produce subpar content.
I don't know which models they use but it's likely already happening.
Yesterday's SEO battles are today's battles to convince LLMs to produce ad tokens. The corpus is already riddled with such content. And LLMs make it even easier to produce more such spam.
On desktop I have one of those one-click JS toggle extensions, so the moment I see any attempts to interfere with me reading the goddamn article, that website gets its privilege of client-side interactivity revoked due to abuse.
Some annoyed me so much I even disabled JS for them on my phone. I do that more rarely because of how unnecessarily convoluted that setting is in Chromium browsers on Android. You have to navigate 4 levels deep in the settings and enter the domain you want to block into a text field!
For example, I have JS disabled on everything Substack (and it really annoys me when I end up on Substack hosted on a custom domain).
> "This is btw. usually all before I even established if the content is what I was looking for, or is at any way useful to me (often it is not)."
This is the huge one for me. If you search for something in natural language, the results you get on any search engine completely suck - yet ironically the AI overview is generally spot on. Search engines have been stuck in ~2003 for decades. Now the next 'breakthrough' is to use their LLMs to actually link to relevant content instead of using pagerank+ or whatever dysfunctional SEO'd algorithm they're still using.
The law is good, but websites implement it badly on purpose to direct consumer ire towards the EU. There's good money to be made if they manage to make the voting public hate the cookie banners so much that the anti-tracking legislation gets repealed.
What you describe is subtly different from what is in the article.
The article is about Google (and other traditional search engines) snatching away clicks from web site owners. What you describe is AI tools (for lack of a better word[1]) snatching away traffic from the ruling gatekeepers of the web.
I think the latter is a much bigger shift and might well be the end of Google.
By extension it will be the end of SEO as we know it. A lot of discussion currently (especially on HN) is about how to keep the bad crawlers out and in general hide from the oh so bad AI guys. That is not unlike the early days of search engines.
I predict we will soon see a phase where this switches by 180° and everyone will see a fight to be the first one to be accessed to get an opportunity to gaslight the agent into their view of the world. A new three letter acronym will be coined, like AIO or something and we will see a shift from textual content to assets the AI tools can only link to.
Maybe this has already happened to some degree.
[1] Where would I put the boundary? Is Kagi the former or the latter? I'd say if a tool does a non-predetermined number of independent activities (like searches) on its own and only stops if some criteria are fulfilled it is clearly in the latter category.
You're spot on. That shift you're describing isn't a prediction anymore, it's already happening.
The term you're looking for is GEO (Generative Engine Optimization), though your "AIO" is also used. It's the new frontier.
And you've nailed the 180° turn: the game is no longer about blocking crawlers but about a race to become their primary source. The goal is to be the one to "gaslight the agent" into adopting your view of the world. This is achieved not through old SEO tricks, but by creating highly structured, authoritative content that is easy for an LLM to cite.
Your point about shifting to "assets the AI tools can only link to" is the other key piece. As AI summarization becomes the norm, the value is in creating things that can't be summarized away: proprietary data, interactive tools, and unique video content. The goal is to become the necessary destination that the AI must point to.
The end of SEO as we know it is here. The fight for visibility has just moved up a layer of abstraction.
> I predict we will soon see a phase where this switches by 180° and everyone will see a fight to be the first one to be accessed to get an opportunity to gaslight the agent into their view of the world. A new three letter acronym will be coined, like AIO or something and we will see a shift from textual content to assets the AI tools can only link to.
I can definitely see LLM companies offering content creators a bump in training priority for a fee. It will be like ad sales, but you're paying for the LLM to consider your content at a higher priority than your competition's.
I use Firefox with standard security options and the uBlock Origin Add-On on my Android phone and I virtually never see what you describe, bar the tracking consent nag screen ofc. Maybe we visit vastly different web content?
I guess if my experience were as degraded as yours I wouldn't bother with the web anymore, so yay for AI summarizers, at least for the time being. And don't get me wrong, a summarizer is a workaround, not a solution.
There is an extension called "I still don't care about cookies" that mostly solves the nag screens. (There's also a similar one without the "still" in its name, but that one was bought by an ad company and enshittified.) AFAIU it usually accepts the cookies, though, so you should combine it with something that clears your cookies periodically.
Sometimes it breaks the site so that you can't scroll or something, but that's quite rare. And most of the time it's solved by a refresh. Very infrequently you need to whitelist the site and then deal with the nag screen manually. A bit annoying, but way better than rawdogging it.
Also gotta have every click on the page, even one just to highlight text, navigate to a shopping cart subscription page and then break the back button.
Clicking on a video to mute it also needs to navigate to a sponsor’s page and break the back button. And then the page reloads which doubles the page view count. Genius web dev decision. I bet they said “there’s literally no downsides to doing this!”
Also, the ads need to autoplay on full volume, often bypassing my system volume somehow so they can play even though the rest of the audio is on mute and none of the mute functionality works. Surely the user simply forgot they had mute on so we should just go ahead and fix that.
They also need to play on 4K ultra HD to use my entire monthly cell plan if I don’t stop it in the first 3 seconds, which I can’t do because the video has to fully load before I’m able to interact with it to click stop. Or clicking stop pauses it and then automatically restarts playing the video.
These Chrome devs need to stop adding random new features and start fixing the basic functionality. I don't want fading rotating banners that save 3 lines of CSS. I want the "DO NOT AUTOPLAY. EVER." button to actually work.
Ugh this just makes me wonder how long it will be before we start seeing responses to AI chat like "please watch this 30s ad / drink a verification can to get your answer". I have to believe that ads are coming.
I have been leaning more and more on Marginalia Search to avoid the type of webpages you are describing. The filters centered on page technologies seem to weed out much that is wrong with the modern style-over-substance web, IMHO.
I'm actually rolling out changes as we speak that should make nuisance identification even better, and will result in throwing out fewer babies with the bathwater.
Pretty accurate. The web is generally unpleasant at the moment, especially using a search engine as your entry point. The first page of results is irrelevant paid ads.
My web experience has been reduced to a handful of bookmarks, X, and chatgpt or grok. Occasionally I’ll go looking for government sites to validate something I read on X. Everything else is noise
There were solutions like Google Web Light, AMP HTML, or Facebook Instant Articles, but sadly they are mostly gone. There is still reader mode in some browsers (e.g. Speedreader in Brave), which helps a lot. And of course uBlock Origin (Lite) is a must.
I know it won't fix the core issue, but you can try (at least on android) Firefox with uBlock Origin (with filter lists for cookies and annoyances enabled). It makes the web usable on mobile for me.
My big hope is that somehow magically we avoid bringing this experience back to AI summaries and chats. Realistically, though, I will be on the lookout for the next generation of uBlock, NextDNS and the like.
Also people are just lazy and will choose the path of least resistance. I'll bet that Wikipedia and other websites are affected and don't fit in that list of legitimate grievances.
NoScript removes almost all of that, at the insignificant cost of sometimes having to add some (usually temporary) exceptions to run scripts from a few domains.
Clay tablets and library books have no ads either. NoScript is not the solution to the web being full of AI-generated SEO crap. It’s a bandaid over the real problem.
Sometimes they hijack the back navigation and present their own clone of the Google Discover feed. If you're not careful you might end up in a different feed entirely.
I mean, you're not wrong. Try searching for any recipe, or any query where you just want a simple answer. The problem you're outlining isn't just the fault of search engines/AI/results. Simple questions shouldn't have their answers buried in paragraphs of dialogue and more than one ad.
My "favorite" Google dark-pattern, for which the dreamy kid in me hopes they get fucking sued to oblivion for how offensive it is[1]:
1. Open safari
2. Type something so that it goes search google
3. A web results page appears
4. Immediately a popup appears with two buttons:
- They have the same size
- One is highlighted in blue and it says CONTINUE
- The other is faint and reads "Stay in browser" (but in my native language the distinction is even less clear)
5. Clicking CONTINUE means "CONTINUE in the app", so it takes me to the Google App (or, actually, to the app store, because I don't have this app), but this does not end there!
6. If I go back to the browser to try to fucking use Google on my fucking browser, as I fucking wanted to, I realize that hitting "Back" now constantly moves me to the app (or App Store). So, in effect, I can never get the search results once I have clicked CONTINUE. The back button has been hijacked (long pressing does not help). My only option is to NEVER click continue
7. Bonus: All of this happens regardless of my iPhone having the google app installed or not
So: Big button that says "CONTINUE" does not "CONTINUE" this action (it, of course, "CONTINUES" outside).
I just want to FUCKING BROWSE THE WEB. If I use the google app, then clicking a link presumably either keeps me in its specific view of the web (outside of my browser), or it takes me out of the app. This is not the experience I want. I have a BROWSER for a reason (e.g. shared groups/tabs...)
Oh! And since this happens even if I don't have the app, it takes me to the app store. If I install the app via the app store, it then DOES NOT have any mechanism to actually "Continue". It's a fresh install. And, of course, if I go back to the browser and hit "back", I can't.
So for users who DO NOT HAVE THE APP, this will NEVER LET THEM CONTINUE. It will PREVENT THEM FROM USING GOOGLE. And it will force them to do their query AGAIN.
Did the people who work on this feature simply give up? What. The. Fuck?
This behavior seems to happen on-and-off, as if google is gaslighting me. Sometimes it happens every time I open Safari. Some other times it goes for days without appearing. Sometimes in anonymous tabs, sometimes not. Logged in or not, I've seen both scenarios.
I can't be sure, but I genuinely believe that the order of the buttons has been swapped, messing with my muscle memory.
Except a still image cannot describe the excruciating process of dealing with it — especially realizing "oh, wait, I clicked the wrong button, oh wait, no no no, get out of the app store, oh oh oh what did I type again? Damn I lost it all!..."
[1]I would quit before implementing this feature. It disgusts me, and we're talking about google, not some run-of-the-mill company whom you have to work for to barely survive. This is absolutely shameful.
I've written high-quality technical how-tos for many years, starting with PC World magazine articles (supported by ads), a book that helped people learn Ruby on Rails (sales via Amazon), and more recently a website that's good for queries like "uninstall Homebrew" or "xcode command line tools" (sponsored by a carefully chosen advertiser). With both a (small) financial incentive and the intrinsic satisfaction of doing good work that people appreciate, I know I've helped a LOT of people over four decades.
A year ago my ad-supported website had 100,000 monthly active users. Now, like the article says, traffic is down 40% thanks to Google AI Overview zero clicks. There's loss of revenue, yes, but apart from that, I'm wondering how people can find my work, if I produce more? They seldom click through on the "source" attributes, if any.
I wonder, am I standing at the gates of hell in a line that includes Tower Records and Blockbuster? Arguably because I'm among those that built this dystopia with ever-so-helpful technical content.
> am I standing at the gates of hell in a line that includes Tower Records and Blockbuster?
Maybe, but there’s a big difference - Netflix doesn’t rely on Blockbuster, and Spotify doesn’t need Tower Records. Google AI results do need your articles, and it returns the content of them to your readers without sending you the traffic. And Google is just trying to fend off ChatGPT and Meta and others, who absolutely will, if allowed, try to use their AI to become the new search gateways and supplant Google entirely.
This race will continue as long as Google & OpenAI & everyone else gets to train on your articles without paying anything for them. Hopefully in the future, AI training will either be fully curated and trained on material that’s legal to use, or it will license and pay for the material they want that’s not otherwise free. TBH I’m surprised the copyright backlash hasn’t been much, much bigger. Ideally the lost traffic you’re seeing is back-filled with licensing income.
I guess you can rest a little easier since we got to where we are now not primarily because of technical means but mostly by allowing mass copyright violation. And maybe it helps a little to know that most content-producing jobs in the world are in the same boat you are, including the programmers in your target audience. That’s cold comfort, but OTOH the problem you (we) face is far more likely to be addressed and fixed than if it was only a few people affected.
We are heading for an internet Kessler syndrome, where the destruction of human-written text will cause LLMs to train on dirty LLM-written text, causing the further destruction of human-written text and the further degradation of LLM-written text. Eventually LLMs will be useless and human-written text will not be discoverable. I pray that the answer is that people seek out spaces which are not monetized (such as the Gemini protocol) so that there's no economic incentive to waste computing resources on them.
> TBH I’m surprised the copyright backlash hasn’t been much, much bigger.
Even when you have them dead to rights (like with the Whisper hallucinations) the legal argument is hard to make. Besides, the defendants have unfathomable resources.
The recent taking of people's content for AI training might be the most blatant example of rich well connected people having different rules in our society that I've ever witnessed. If a random person copied mass amounts of IP and resold it in a different product with zero attribution or compensation, and that product directly undercut the business of those same IP producers, they would be thrown in jail. Normal people get treated as criminals for seeding a few movies, but the Sam Altmans of the world can break those laws on an unprecedented scale with no repercussions.
As sad as it is, I think we're looking at the end of the open internet as we've known it. This is massive tragedy of the commons situation and there seems to be roughly zero political will to enact needed regulations to keep things fair and sustainable. The costs of this trend are massive, but they are spread out across many millions of disparate producers and consumers, while the gains are extremely concentrated in the hands of the few; and those few have good lobbyists.
I mean, there's always been a grey area even when it came to tiny snippets in the results, though those actually encouraged you to click through when you found the right result.
The beginning of the end was including Wikipedia entries directly in the search results, although arguably even some of the image results are high quality enough to warrant skipping the actual website (if you were lucky enough to get the image at the target site in the first place). So maybe it goes back even further than that.
It does speak to one of the core problems with AI: the one-time productivity boost from using all the historical data created by humans won't be as useful going forward, since individual contributors will no longer build and provide that information unless the incentive models change.
Most of your monthly active users don't want to read your articles. They want to get their questions answered with as little effort as possible. This is what Google's Overview is doing: it's transforming your articles into a form-factor that the users want. This is what you could be doing as well: rather than creating food for AI, create a mini-AI yourself that answers user questions. It doesn't have to fabricate answers, rather it can quote your memos in a format tailored to users, while your memos will remain private. This will also stonewall Google's AI, for now it would have to interrogate your mini-AI.
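A minimal sketch of that "mini-AI" idea, with entirely hypothetical memos and a naive keyword-overlap retriever standing in for whatever a real version would use (embeddings, an actual LLM, etc.):

```python
# Toy site-local Q&A: answer visitor questions by quoting short private
# memos instead of exposing full articles for crawlers to summarize.
# The memos and the keyword-overlap scoring are illustrative placeholders.
MEMOS = {
    "uninstall homebrew": "Run Homebrew's official uninstall script, then delete leftover directories.",
    "xcode command line tools": "Install them with `xcode-select --install` in Terminal.",
}

def answer(question: str) -> str:
    """Return the memo whose title/body shares the most words with the question."""
    q = set(question.lower().replace("?", "").split())
    def overlap(item):
        title, body = item
        return len(q & (set(title.split()) | set(body.lower().split())))
    title, body = max(MEMOS.items(), key=overlap)
    return f'From our notes on "{title}": {body}'

print(answer("How do I uninstall Homebrew?"))
```

The point isn't the retrieval mechanism; it's that the full memos stay server-side, and only question-shaped answers go out.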
Unfortunately for you, this kind of content does seem to be going the way of Blockbuster. But the writing was on the wall for years with how useless Google Search became due to the over-SEOification of every website; LLMs were just the dagger.
The pattern repeats itself. Come, use our services, it's free, it will be good for you. Users elevate the service to a monopoly. And then the behemoth thinks that the users - who gave their blood so the behemoth could grow - are now more like a nuisance and kills those that are the most vulnerable.
Every year they raise the threshold higher, and more and more people get burned. Of course the big, established brands are protected.
So they don't want the average joe's opinion. And they don't want to funnel money to you, now that you have fulfilled your purpose.
What you describe is more like the traditional VC structure for most businesses: provide low-cost services while the user base needs to grow; once the user base is established, the model shifts to extracting value from users, so quality drops and costs increase.
It happens with all VC-backed products, since the pressure for returns on invested capital is so high.
Put another way: early-stage products that everyone uses and loves should not (in most, though not all, cases) be assumed to be the end product.
Novel content will continue to require human creators. So, if you are at the frontier of some idea space, whether that’s using Homebrew or baking brownies, your input will be rewarded to some extent. But, we won’t need 1000 different Medium blogs about installing Rails or 1000 baking websites pitching the same recipe but with a different family story at the top.
Yes, maybe only a small number of people will ultimately contribute, but if their input is truly novel and "true", then what's the downside?
It will just be different. No profit train lasts forever. Google is about to be made utterly irrelevant after 20+ years or so as a company. And they were the best.
If you still have a connection to your readers (e.g. email) you can still reach them. If they've formed a community, even better. If not, it's a good time to work on that.
Google doesn't really have that. I have zero sense of community with Google. And that's why they'll die if something doesn't change.
> and more recently a website that's good for queries like "uninstall Homebrew" or "xcode command line tools" (sponsored by a carefully chosen advertiser). With both a (small) financial incentive and the intrinsic satisfaction of doing good work that people appreciate, I know I've helped a LOT of people over four decades.
Simple content that can be conveyed in a few succinct lines of text (like how to uninstall Homebrew) is actually one of the great use cases for AI summaries.
I’m sorry that it’s losing you revenue, but I’d much rather get a quick answer from AI than have to roll the dice on an ad-supported search result where I have to parse the layout, dodge the ads, and extract the relevant info from the filler content and verbiage
I mean, then what happens when there isn't enough money in producing answers but technology continues to move forward? There isn't any more content for the AI to summarize to answer with...
It's just a question of how content is produced and ingested.
Utopian fantasy: you interact with the AI, and novel findings are registered as such, "saved", and made available to others.
Creative ideas are registered as such and, where possible, tested in "side quests", i.e. the AI asks: do you have 5 minutes to try this? You unblock yourself if it works, and later see how many others profited as well ("3k people read this finding").
A lot of the comments here are along the lines of "websites are often hostile, and AI summaries are a better user experience" which I agree with for most cases. I think the main thing to be worried about is that this model is undermining the fundamental economic model the internet's currently based on.
If I create content like recipes, journalism etc, previously I had exclusive rights to my created content and could monetise it however I wanted. This has mostly led to what we have today, some high quality content, lots of low quality content, mostly monetised through user hostile ads.
Previously, if I wanted to take a recipe from "strawberry-recipes.cool" and published it on my own website with a better user experience, that wouldn't have been allowed because of copyright rules. I still can't do that, but Google can if it's done through the mechanism of AI summaries.
I think the worst case scenario is that people stop publishing content on the web altogether. The most likely one is that search/summary engines eat up money that previously came from content creators. The best one is that we find some alternative, third way for creators to monetise content while maintaining discoverability.
I'm not sure what will happen, and I'm not denying the usefulness of AI summaries, but it feels easy to miss that, at their core, they're a fundamental reworking of the current economics of the internet.
> I think the main thing to be worried about is that this model is undermining the fundamental economic model the internet's currently based on.
This would be lovely.
> I think the worst case scenario is that people stop publishing content on the web altogether. The most likely one is that search/summary engines eat up money that previously came from content creators.
More than likely, people return to publishing content because they love the subject matter and not because it is an angle to “create content” or “gain followers” or show ads. No more “the top 25 hats in July 2025” AI slopfest SEO articles when I look for a hat, but a thoughtful series of reviews with no ads or affiliate links, just because someone is passionate about hats. The horror! The horror!
>More than likely, people return to publishing content because they love the subject matter and not because it is an angle to “create content” or “gain followers” or show ads.
Why would you do that if you thought it was going to be hoovered up by some giant corporation and spat out again for $20 a month with no attribution?
And Google would still use AI to get money by using that content without having to access your website. Besides that, creating content IS work for a lot of people. Ads and affiliated links are part of the monetization model that works the best, sadly. What you are saying is "people should just code for fun and curiosity, their income should come from elsewhere" while Google is making money with Gemini. It's not necessarily wrong, but it sounds dismissive.
I agree the current model sucks, but I think it being replaced is only good if it's replaced with something better.
> More than likely, people return to publishing content because they love the subject matter
I'd love the idea of people doing things because they're passionate, but I feel a little unsure about people doing things out of passion, those things generating money, and all that money going to AI summariser companies. I think there are some pretty serious limits too: journalists risk their safety a lot of the time, and I can't see a world where that happens purely out of "passion" without any remuneration. Aside from anything else, some acts of journalism, like overseas reporting, aren't compatible with working a separate for-pay job.
It's not going to happen this way, because these days getting anywhere near the top of Google results requires being an established content publisher, basically anyone with enough followers.
Someone who publishes content because they love the subject matter would only reach a big enough audience to have an impact if they work on it, a lot, and most people won't do that without some expectation of return on investment, so they'd follow the influencer / commercial-publication playbook and end up in the same place the established players already are.
If you're satisfied of being on the 50th page on the Google results, then that's fine. Nobody will find you though.
Being passionate about hats is one thing, but being passionate about sharing something you care about with others is the real driver for publishing. As LLMs degrade web discoverability through search (summaries+slop results), there's no incentive for the latter people to continue publishing on the open web or even the bot-infested closed gardens.
The web is on a trajectory where a local DIY zine will reach as many readers as an open website. It might even be cheaper than paying for a domain and hosting once that industry contracts and hosting plans aren't robust enough to keep up with requests from vibe-coded scrapers.
> More than likely, people return to publishing content because they love the subject matter and not because it is an angle to “create content” or “gain followers” or show ads. No more “the top 25 hats in July 2025” AI slopfest SEO articles when I look for a hat, but a thoughtful series of reviews with no ads or affiliate links, just because someone is passionate about hats. The horror! The horror!
I disagree with that. There are still people out there doing it out of passion; that hasn't changed (they're just harder to find). Bad actors who are only in it for the money will keep chasing the money.
Blogs might not be relevant anymore, but social media influencing is still going to be a thing. SEO will continue to exist, but now targeted at influencing AIs instead of positions in Google search results. AIs will need to become (more) profitable, which means they will include advertising at some point. Instead of companies paying Google to place their products in search, or paying influencers through affiliate links, they will just pay AI companies to place their products in AI results, or pay influencers to create fake reviews that sway the AI bots. An SEO slop article is at least easy to detect; recommendations from AIs are much harder to verify.
It's also going to hit journalism. Not everyone can just blog because they are passionate about something. Any content produced by professionals will either be paywalled even more, or they'll need to find different sources of income, threatening journalistic integrity. And that gives bad actors with money even more ways to publish news in their own interest for free, gaining more influence over public debate.
It's crazy how few people see it that way. Big tech is capturing all the value created by content creators, and it's slowly strangling the independent web it feeds on. It's a parasitic relationship. Once the parasite has killed its host, it will feed on its users.
>If I create content like recipes ... previously I had exclusive rights to my created content
Recipes are not protected by copyright law. That's _why_ recipe bloggers have resorted to editorialising recipes, because the editorial content is copyrightable.
Haha, you've exposed that I know absolutely nothing about copyright law! That's a great point, but I think my original point still stands if you swap out my full-of-holes example for a type of content that is copyrightable.
> I think the worst case scenario is that people stop publishing content on the web altogether
We're quite clearly heading in that direction, but with a twist: if there's no money in authenticity or correctness, the only publishers left will be advertisers and propagandists.
There was little to no money in authenticity or correctness in the heyday of home pages and personal blogs. People published because they were excited about sharing information and opinions. That was arguably the internet at its best.
> I think the main thing to be worried about is that this model is undermining the fundamental economic model the internet's currently based on.
And this is the reason why Google took its sweet time to counter OpenAI's GPT-3. They _had_ to come up with this, which admittedly disrupts publishers' business model, but at least if Google is successful they will keep their moat as the first step in any sales funnel.
> Previously, if I wanted to take a recipe from "strawberry-recipes.cool" and published it on my own website with a better user experience, that wouldn't have been allowed because of copyright rules
This is not true, you absolutely could have republished a recipe with your own wording and user experience.
At some stage Google will need to be held accountable for the answers they host on their own site. The argument of "we're only indexing info on other sites" changes when you are building a tool that generates content and hosting that content on your own domain.
I'm guilty of not clicking when I'm satisfied with the AI answer. I know it can be wrong. I've seen it be wrong multiple times. But it's right at the top and tells me what I suspected when I did the search. The way they position the AI overview is right in your face.
I would prefer the "AI overview" to be replaced with something that helps me better search rather than giving me the answer directly.
>But it's right at the top and tells me what I suspected when I did the search. The way they position the AI overview is right in your face.
Which also introduces the insidious possibility that AI summaries will be designed to confirm biases. People already use AI chat logs to prove stuff, which is insane, but it works on some folks.
> The argument of "we're only indexing info on other sites" changes when you are building a tool to generate content and hosting that content on your own domain.
And yet, "the algorithm" has always been their first defense whenever they got a complaint or lawsuit about search results; I suspect that when (not if) they get sued over this, they will do the same. Treating their algorithms and systems as a mysterious, somewhat magic black box.
You can opt in to getting an LLM response by phrasing your query as a question.
Searching for “who is Roger rabbit” gives me Wikipedia, IMDb and film site as results.
Searching for “who is Roger rabbit?” gives me a “quick answer” LLM-generated response: “Roger Rabbit is a fictional animated anthropomorphic rabbit who first appeared in Gary K. Wolf's 1981 novel…” followed by a different set of results. It seems the results are influenced by the sources/references the LLM generated.
I'm more interested now than ever. A lot of my time spent searching is for obscure or hard-to-find stuff, and in the past smaller search engines were useless for this. But most of my searches are quick and the primary thing slowing me down are Google product managers. So maybe Kagi is worth a try?
However, it's pretty bad for local results and shopping. I find that anytime I need to know a local store's hours or find the cheapest place to purchase an item, I need to pivot back to Google. Other than that, it's become my default for most things.
Thanks for the suggestion. I try nonstandard search engines now and then and maybe this one will stick. Google certainly is trying their best to encourage me.
After about a year on Kagi my work browser randomly reverted to Google. I didn’t notice the page title, as my eyes go right to the results. I recoiled. 0 organic results without scrolling, just ads and sponsored links everywhere. It seems like Google boiled the frog one degree at a time. Everyone is in hell and just doesn’t know it, because it happened so gradually.
I’ve also tried various engines over the years. Kagi was the first one that didn’t have me needing to go back to Google. I regularly find things that people using Google seem to not find. The Assistant has solved enough of my AI needs that I don’t bother subscribing to any dedicated AI company. I don’t miss Google search at all.
I do still use Google Maps, as its business data still seems like the best out there, and second place isn't even close. Kagi is working on their own maps, but that will be a long road. I'm still waiting for Apple to really go all-in, instead of leaning on Yelp.
Apple really needs to update Safari to let people choose any search engine, instead of only offering the list of blessed ones.
Conversely, it's sometimes useful to get an immediate answer.
Six months ago, "what temp is pork safe at?" meant a few clicks through long SEO-optimised blog post answers, usually all in °F not °C, despite Google knowing my location. I used it at the time as an example of "how hard can this be?"
First sentence of Google's AI response right now: "Pork is safe to eat when cooked to an internal temperature of 145°F (63°C)"
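For what it's worth, the two figures in that answer are at least internally consistent with each other; a quick sanity check of the unit conversion:

```python
# Plain Fahrenheit-to-Celsius conversion: C = (F - 32) * 5/9.
# Used here only to check that the two numbers quoted above agree.
def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

print(round(f_to_c(145)))  # 145 °F rounds to 63 °C, matching the quoted answer
```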
Dear lord please don’t use an AI overview answer for food safety.
If you made a bet with your friend and are using the AI overview to settle it, fine. But please please click on an actual result from a trusted source if you’re deciding what temperature to cook meat to
The problem is that SEO has made it hard to find trustworthy sites in the first place. The places I trust the most now for getting random information is Reddit and Wikipedia, which is absolutely ridiculous as they are terrible options.
But SEO slop machines have made it so hard to find the good websites without putting in more legwork than makes sense a lot of the time. Funnily enough, this makes AI look like a good option to cut through all the noise despite its hallucinations. That's obviously not acceptable when it comes to food safety concerns though.
Mmm, I see this cutting both ways. Generally, I'd agree: safety-critical things should not be left to an AI. However, cooking temperatures are information with a factual ground truth (or at least one that has been decided on), with VERY broad distribution on the internet, and generally a single, short "kernel" of information, exactly the kind that has become subject to slop-ifying and "here's an article when you're looking for 30 characters of information or less" padding that is prolific on the web.
So, I'd agree -- safety info from an LLM is bad. But generally, the /flavor/ (heh) of information that such data comprises is REALLY good to get from LLMs (as opposed to nuanced opinions or subjective feedback).
Idk. Maybe that's true today (though even today I'm not sure) but how long before AI becomes better than just finding random text on a website?
After all, AI can theoretically ask follow-up questions that are relevant, can explain subtleties peculiar to a specific situation or request, can rephrase things in ways that are clearer for the end user.
Btw, "what temperature should a food be cooked to" is a classic example of something where lots of people and lots of sources repeat incorrect information, which is often ignored by people who actually cook. Famously, the temp that is often "recommended" is only the temp at which bacteria are killed instantly, but it's often too hot for the food to taste good. What's normally recommended instead is to cook to a lower temperature and hold the food there a bit longer, which has the same safety effect but tastes much better.
When I searched for the safe temperature for pork (in German), I found this as the first link (Kagi search engine)
> Ideally, pork should taste pink, with a core temperature between 58 and 59 degrees Celsius. You can determine the exact temperature using a meat thermometer.
Is that not a health concern?
Not anymore, as nutrition expert Dagmar von Cramm confirms:
> “Trichinae inspection in Germany is so strict — even for wild boars — that there is no longer any danger.”
I was just thinking that EU sources might be a good place to look for this sort of thing, given that we never really know what basic public health facts will be deemed political in the US on any given day. But this reveals a bit of a problem: of course you guys have food safety standards, so advice that is safe over there might not be applicable in the US.
Funny story: I used that to find the cooked temperature of burgers, and it said medium-rare was 130. I proceeded to eat it and all, but about halfway through I noticed the middle of this burger looked really red, which didn't seem normal, and suddenly I remembered: wait, ground beef is always supposed to be 160; 130 medium-rare is for steak.
I then chatted that back to it, and it was like, oh ya, I made a mistake, you're right, sorry.
Anyways, luckily I did not get sick.
Moral of the story, don't get mentally lazy and use AI to save you the brain it takes for simple answers.
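The ironic part of the story above is that this is exactly the kind of fact that fits in a hard-coded lookup. A sketch using the published USDA FSIS consumer minimums (the values are from FSIS guidance; the lookup structure itself is just an illustration):

```python
# USDA FSIS minimum safe internal temperatures, degrees Fahrenheit.
# These three values are the published FSIS consumer guidance.
USDA_MIN_TEMP_F = {
    "whole cuts (beef, pork, lamb, veal)": 145,  # plus a 3-minute rest
    "ground meats (beef, pork)": 160,
    "poultry (whole, parts, ground)": 165,
}

def min_safe_temp(food: str) -> int:
    """Return the FSIS minimum internal temperature for a food category."""
    return USDA_MIN_TEMP_F[food]

print(min_safe_temp("ground meats (beef, pork)"))  # 160 -- not 130
```

Three key-value pairs, unchanged for years, and the AI answer still got one wrong.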
Do you actually put a thermometer in your burgers/steaks/meat when you’re cooking? That seems really weird.
Why are people downvoting this? I’ve literally never seen anyone use a thermometer to cook a burger or steak or pork chop. A whole roasted turkey, sure.
Why would you purchase meat that you suspect is diseased? Even if you cook it well-done, all the (now dead) bacteria and their byproducts are still inside. I don't understand why people do this to themselves? If I have any suspicion about some meat, I'll throw it away. I'm not going to cook it.
Most of the time if there isn't a straightforward primary source in the top results, Google's AI overview won't get it right either.
Given the enormous scale and latency constraints they're dealing with, they're not using SOTA models, and they're probably not feeding the model 5000 words worth of context from every result on the page.
Not only that, it includes a link to the USDA reference so you can verify it yourself. I have switched back to google because of how useful I find the RAG overviews.
The link is the only useful part, since you can’t trust the summary.
Maybe they could just show the links that match your query and skip the overview. Sounds like a billion-dollar startup idea, wonder why nobody’s done it.
As of a couple weeks ago it had a variety of unsafe food recommendations regarding sous vide, e.g. suggesting 129F for 4+ hours for venison backstrap. That works great some of the time but has a very real risk of bacterial infiltration (133F being similar in texture and much safer, or 2hr being a safer cook time if you want to stick to 129F).
No it wasn't: most of the first-page results have the temperature right there in the summary, many of them with both F and C, and unlike the AI response, there is a much lower chance of hallucinated results.
So you've gained nothing
PS
Trying the same search with -ai gets you the full table with temperatures, unlike with the AI summary where you have to click to get more details, so the new AI summary is strictly worse
Honestly, the SEO talk sounds like reflexive coping in this discourse. I get that the web has cheapened quality, but we now have tech that could defeat most SEO and other trash tactics on the search-engine side. Text analysis as a task is cracked open. Google and the like could detect dark patterns with LLMs, or even just deep learning. That would probably be more reliable than answering factual queries.
The problem is there is no money and fame in using it that way, or at least so people think in the current moment. But we could return to enforcing some sort of clear, pro-reader writing and bury the 2010s-2020s SEO garbage on page 30.
Not to mention that the LLMs randomly lie to you, with fewer secondary hints at trustworthiness (author, website, other articles, design, etc.) than you get in any other medium. And there's the sustainability side of incentivizing people to publish anything at all. I really see the devil of convenience as the only argument for the LLM summaries here.
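On the "text analysis is cracked open" point: even crude statistics separate padded SEO copy from direct answers. A toy sketch, in no way how Google actually ranks, with made-up example strings:

```python
# Toy heuristic for keyword-stuffed filler: what fraction of the text
# is taken up by its single most repeated word? Real systems would use
# models, not this -- it only illustrates that the signal is detectable.
from collections import Counter

def stuffing_score(text: str) -> float:
    """Fraction of words accounted for by the most repeated word."""
    words = [w.lower().strip(".,!?") for w in text.split()]
    if not words:
        return 0.0
    _, top_count = Counter(words).most_common(1)[0]
    return top_count / len(words)

direct = "Cook ground beef to an internal temperature of 160 F."
slop = ("pork temperature pork temperature guide best pork temperature "
        "pork temperature chart pork temperature tips pork")
print(stuffing_score(direct) < stuffing_score(slop))  # True
```

If a one-liner can flag the worst of it, a search engine with an LLM budget certainly could.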
> But we could return to enforcing some sort of clear, pro-reader writing and bury the 2010s-2020s SEO garbage on page 30.
We could.
But it will absolutely not happen unless and until it can be more profitable than Google's current model.
What's your plan?
> Not to mention that the LLMs randomly lie to you, with fewer secondary hints at trustworthiness (author, website, other articles, design, etc.) than you get in any other medium. And there's the sustainability side of incentivizing people to publish anything at all. I really see the devil of convenience as the only argument for the LLM summaries here.
Well, yes. That's the problem. Why rely on the same random liars as taste-makers?
Why do you think that answer is correct? I mean maybe it is, or maybe it’s by the same user who recommended eating rocks (which ‘AI’ also recommended).
It doesn't take long to find SEO slop trying to sell you something:
When our grandmothers and grandfathers were growing up, there was a real threat to their health that we don’t face anymore. No, I’m not talking about the lack of antibiotics, nor the scarcity of nutritious food. It was trichinosis, a parasitic disease that used to be caught from undercooked pork.
The legitimate worry of trichinosis led their mothers to cook their pork until it was very well done. They learned to cook it that way and passed that cooking knowledge down to their offspring, and so on down to us. The result? We’ve all eaten a lot of too-dry, overcooked pork.
But hark! The danger is, for the most part, past, and we can all enjoy our pork as the succulent meat it was always intended to be. With proper temperature control, we can have better pork than our ancestors ever dreamed of. Here, we’ll look at a more nuanced way of thinking about pork temperatures than you’ve likely encountered before.
Sorry, what temperature was it again?
Luckily there's the National Pork Board which has bought its way to the top, just below the AI overview. So this time around I won't die from undercooked pork at least.
Incredible, you are the problem. Didn't think I'd see such an idiotic answer on HN, please for the love of god do not use AI to know what is safe to eat.
I'd consider that Google thinks it's good enough for people to base their food safety on, and they deserve to get sued for whatever they're worth for providing such recommendations when somebody trusts them and gets sick.
Google AI has been listing incorrect internal extensions causing departments to field calls for people trying to reach unrelated divisions and services, listing times and dates of events that don't exist at our addresses that people are showing up to, and generally misdirecting and misguiding people who really need correct information from a truth source like our websites.
We have to track each and every one of these problems down, investigate and evaluate whether we can reproduce them, give them a "thumbs down" to then be able to submit "feedback", with no assurance it will be fixed in a timely manner and no obvious way to opt ourselves out of it entirely. For something beyond our consent and control.
It's worse than when Google and Yelp would create unofficial business profiles on your behalf and then held them hostage until you registered with their services to change them.
Google's AI overview not only ignores this geographic detail, it ignores the high-quality NHS care delivery websites, and presents you with stuff from US sites like Mayo Clinic. Mayo Clinic is a great resource, if you live in the USA, but US medical advice is wildly different to the UK.
[1] https://www.nhs.uk [2] https://www.nhsinform.scot
Weird because although I dislike what Google Search has become as much as any other HNer, one thing that mostly does work well is localised content. Since I live in a small country next to a big country that speaks the same language, it's quite noticeable to me that Google goes to great lengths to find the actually relevant content for my searches when applicable... of course it's not always what I'm actually looking for, because I'm actually a citizen of the other country that I'm not living in, and it makes it difficult to find answers that are relevant to that country. You can add "cr=countryXX" as a query parameter but I always forget about it.
Anyway, I wasn't sure if the LLM results were localised because I never pay attention to them, so I checked and it works fine; they are localised for me. Searching for "where do I declare my taxes", for example, gives the correct answer depending on the country my IP is from.
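For reference, the country-restrict trick mentioned above is just a query parameter (Google's `cr`, with values like `countryFR`). A small sketch that builds the URL so you don't have to remember it; the example query is hypothetical:

```python
# Build a Google search URL with the `cr` (country restrict) parameter.
# `cr` takes "country" + an uppercase two-letter country code.
from urllib.parse import urlencode

def google_search_url(query: str, country_code: str) -> str:
    params = urlencode({"q": query, "cr": f"country{country_code.upper()}"})
    return f"https://www.google.com/search?{params}"

print(google_search_url("where do I declare my taxes", "fr"))
# https://www.google.com/search?q=where+do+I+declare+my+taxes&cr=countryFR
```

A bookmark or keyword search using this pattern beats remembering the parameter each time.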
Dead Comment
It's not a UK-wide one. The home page says "NHS Website for England".
I seem to remember the Scottish one had privacy issues with Google tracking embedded, BTW.
> So, if you've gone into labour somewhere in the remote Highlands and Islands, you'll get different advice than if you lived in Central London, where there's a delivery room within a 30 minute drive
But someone in a remote part of England will get the same advice as someone in central London, and someone in central Edinburgh will get the same advice as someone on a remote island, so it does not really work that way.
> if you live in the USA, but US medical advice is wildly different to the UK.
Human biology is the same, diseases are the same, and the difference in available treatments is usually not all that big. This suggests to me that someone's advice is wrong. Of course there are legitimate differences of opinion (the same applies to differences between
- Looking up a hint for the casino room in the game "Blue Prince", the AI summary gave me details of the card games on offer at the "Blue Prince Casino" in the next suburb over. There is no casino there.
- Looking up workers rights during a discussion of something to do with management, it directly contradicted the legislation and official government guidance.
I can't imagine how frustrating it must be for business-owners, or those providing information services to find that their traffic is intercepted and their potential visitors treated to an inaccurate version on the search page.
> "You can't get boiled rice from a clown" is a phrase that plays on expectations and the absurdity of a situation.
> The phrase "never stack rocks with Elvis" is a playful way of expressing skepticism about the act of stacking rocks in natural environments.
> The saying "two dogs can't build an ocean" is a colloquial and humorous way of expressing the futility or impossibility of a grand, unachievable goal or task.
But, they hired the best and brightest of my generation. How’d they screw it up so bad?
Now it's the world's biggest advertising company, waging war on ad blockers and pushing dark patterns on users.
They've built a browser monopoly with Chrome and can throw their weight around to literally dictate the open web standards.
The only competition is Mozilla Firefox, which ironically is _also_ controlled by Google; they receive millions annually from them.
At some point, an article about how Google was showing this crap made it to the top of the rankings and they started taking the overview from it rather than the original Quora answer it used before. Somehow it still got it wrong, and just lifted the absurd answer from the article rather than the part where the article says it’s very wrong.
Amusingly, they now refuse to show an AI answer for that particular search.
Dead Comment
A third-party LLM hallucinating something like that though? Hell no. It should be possible to sue for libel.
I have come across the same lack of commonsense from ChatGPT in other contexts. It can be very literal with things such as branded terms vs their common more generic meaning (e.g. with IGCSE and International GCSE - UK exams) which again a knowledgeable human would understand.
You need to wait until they offer it as a paid feature. And they (and other LLM providers) will offer it.
Anecdotally, this happened back in analog days, too.
When I worked in local TV, people would call and scream at us if the show they wanted to see was incorrectly listed in the TV Guide.
Screamers: "It's in the TV Guide!"
Me (like a million times): "We decide what goes on the air, not the TV Guide."
The liability question also extends to defamation. Google is no longer just an arbiter of information. They create information themselves. They cannot simply rely on a 'platform provider' defence anymore.
You're fine, you just lost a few verbal IQ points after fasting for 24 hours and doing blood work.
I guess I'm in the minority of people who click through to the sources to confirm the assertions in the summary. I'm surprised most people trust AI, but maybe only because I'm in some sort of bubble.
Let’s not pretend that some websites aren’t straight up bullshit.
There’s blogs spreading bullshit, wrong info, biased info, content marketing for some product etc.
And lord knows comments are frequently wrong, just look around Hackernews.
I’d bet that LLMs are actually wrong less often than typical search results, because they pull from far greater training data. “Wisdom of the crowds”.
Is that relevant when we already have official truth sources: our websites? That information is ours and subject to change at our sole discretion. Google doesn't get to decide who our extensions are assigned to, what our hours of operation are, or what our business services do.
Our initial impression of AI Overview was positive, as well, until this happened to us.
And bear in mind the timeline. We didn't know that this was happening, and even after we realized there was a trend, we didn't know why. We're in the middle of a softphone transition, so we initially blamed ourselves (and panicked a little when what we saw didn't reflect what we assumed was happening - why would people just suddenly start calling wrong numbers?).
After we began collecting responses from misdirected callers and got a nearly unanimous answer of "Google" (don't be proud of that), I called a meeting with our communications and marketing departments and web team to figure out how we'd log and investigate incidents so we could fix the sources. What they turned up was that the numbers had never been publicly published or associated with any of what Google AI was telling them. This wasn't our fault.
So now we're concerned that bad info is being amplified elsewhere on the web. We even considered pulling back the Google-advertised phone extensions so they forward either to a message that tells them Google AI was wrong and to visit our website, or admit defeat and just forward it where Google says it should go (subject to change at Google's pleasure, obviously). We can't do this for established public facing numbers, though, and disrupt business services.
What a stupid saga, but that's how it works when Google treats the world like its personal QA team. (OT, but since we're all working for them by generating training data for their models and fixing their global-scale products, anyone for Google-sponsored UBI?)
If Google shows bullshit about me on the top of its search, I'm helpless.
(for me read any company, person, etc)
1. Here's the answer (but it's misinformation)
2. Here are some websites that look like they might have the answer
?
No different from Google search results.
This is not strictly logical but I have a feeling I'm not alone.
Often the title sort of explains the whole topic (ie lack of parking in NY, or astronomers found the biggest quasar yet), then folks chirp in with their experiences and insight which are sometimes pretty wild.
Me too. That is why sometimes I take the raw comment thread and paste it into a LLM, the result is a grounded article. It contains a diversity of positions and debunking, but the slop is removed. Social threads + LLMs are an amazing combo, getting the LLM polish + the human grounded perspective.
If I was in the place of reddit or HN I would try to generate lots of socially grounded articles. They would be better than any other publication because they don't have the same conflict of interests.
Or, you're confusing the primordial desire to be aligned with perceived peers -- checking what others say, then effortlessly nodding along -- with forming your own judgment.
But I never expected that this would also link back to my tendency to skip an article and just stick to what the top comments of a section have, HN or Reddit.
Deleted Comment
I often click on the HN comments before reading the article because the article is very often nothing more than the headline, and I'm more interested in the discussion.
At the same time, I have no issue disagreeing with whatever is the popular stance, there’s almost some catharsis in just speaking the truth along the lines of “What you say might be true in your circumstances and culture, but software isn’t built like that here.”
Regardless, I’d say that there’s nothing wrong with finding likeminded peers either, for example if everyone around you views something like SOLID and DRY as dogma and you think there must be a better, more nuanced way.
Either that, or everyone likes a good tl;dr summary.
This is where it breaks down; why would they shove in MORE ads when their readers are going down? I'm not saying it's a rational decision, of course.
I suspect a big part is metrics-driven development; add an aggressive newsletter popup and newsletter subscriptions increase, therefore it's effective and can stay. Add bigger / flashier ads and ad revenue increases, therefore the big and flashy ads can stay.
User enjoyment is a lot harder to measure. You can look at metrics like page visits and session length, but that's still just metrics. Asking the users themselves has two problems, one is lack of engagement (unless you are a big community already, HN doing a survey would get plenty of feedback), two is that the people don't actually know how they feel about a website or what they want (they want faster horses). Like, I don't think anybody asked Google for an AI summary of what they think you're searching for, but they did, and it made people stay on Google instead of go to the site.
Whether that's good for Google in the long run remains to be seen, of course. Back when Google first rolled out their ad program it... really didn't matter to them, because their ads were on a lot of webpages. Google's targets ended up becoming "keep the users on the internet, make them browse more and faster", and for a while that pushed innovation too: V8, Chrome, Google DNS, Gears, SPDY/HTTP/2/3, Lighthouse, mod_pagespeed, Google Closure Compiler, etc. - all invented to make the web faster, because faster web = more pageviews = more ad impressions = more revenue.
Of course, part of that benefited others; Facebook for example created their own ecosystem, the internet within the internet. But anyway.
Doesn't scale, but maybe that's the only way to survive.
It has little to do with overdesign or load times.
What do HN comments and AI Overviews have in common?
- All information went through a bunch of neurons at least once
- We don't know which information was even considered
- Might be completely false but presented with utmost confidence
- ...?
More generally speaking though, I do agree that comments probably tend to give people more of a dopamine hit than the content itself, especially if it’s long-form. However comments on HN often are quite substantial and of high quality, at least relatively speaking, and the earlier point about reading the articles often being a poor experience has a lot of merit as well. Why can’t it be a combination of all of the above (to various degrees depending on the individual, etc)?
From reading one or a few short comments I at least know what the linked article is about, which the original headline often does not reveal (no fault of those authors, their blogs are often specialized and anyone finding the article there has much more context compared to finding the same headline here on a general aggregation site).
I do this on hackernews, and especially on news-sites I check (cleantechnica, electrec, reneweconomy) and I actively shun sites _without_ comments.
Hey, coming out feels good - I thought I was the only one.
Except the other commenters didn't read the article either. Now you're all basically just LLMs using the title as a prompt.
https://lite.cnn.com for example.
I'm not a big fan of CNN but this is something I'd like to see more of.
I will need $100M in seed funding.
* Page loads, immediately when I start scrolling and reading a popup trying to get tracking consent
* If I am lucky, there is a "necessary only". When unlucky I need to click "manage options" and first see how to reject all tracking
* There is a sticky banner on top/bottom taking 20-30% of my screen, upselling me a subscription or asking me to install their app. Upon pressing the tiny X in the corner it takes 1-2 seconds to close, or multiple presses, because I am either missing the X or there is a network roundtrip
* I scroll down a screen and get a popup overlay asking me to sign up for their service or newsletter, again messing with the X to close
* Video or other flashy ads in the content keep bugging me
This is, btw, usually all before I have even established whether the content is what I was looking for, or is in any way useful to me (often it is not).
If you use AI or Kagi's summarizer, you get ad-free, well-formatted content without any annoyance.
But this is because there is no viable monetization model for non-editorial written word content anymore and hasn’t been for a decade. Google killed the ecosystem they helped create.
Google also killed the display ad market by monopolizing it with AdSense, and then killed AdSense revenue sharing with creators, taking all the money for themselves by turning their 10 blue links into 5 blue ads at the top of the search results. Search advertising is now the most profitable monopoly business of all time.
YouTube is still young, but give it time. Google will eventually kill the golden goose there as well, by trying to harvest too many eggs for themselves.
The same will happen with AI results as well. Companies will be happy to lose money on it for a decade while they fight for dominance. But eventually the call for profits will come and the AI results will require scrolling through mountains of ads to see an answer.
This is the shape of this market. Search driven content in any form is and will always be a yellow pages business. Doesn’t matter if it’s on paper or some future AGI.
Plus there is a subscription that eliminates ads. I think it’s a great experience for users. Many creators also seem to do well too.
I think this should be the model for a new generation of search. Obviously there will be ads/sponsored results. But there should be a subscription option to eliminate the ads.
The key part here will be monetization for content creators. People are no longer clicking links, so how do they get revenue?
I think direct payments from AI companies to content creators will be necessary or the whole internet will implode.
If I'm searching "how to get an intuitive understanding of dot product and cross product", any open source model right now will do a perfectly fine job. By the time the ad-pocalypse reaches AI answers, the models I mention will be runnable locally on consumer hardware. Probably every phone will run one.
I suspect in the next decade we will see the business model of "make money via advertising while trying/pretending to provide knowledge" become well and truly dead.
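For what it's worth, the dot/cross intuition from that example search is easy to sanity-check locally with numpy (vectors chosen purely for illustration):

```python
# dot(a, b) = |a||b|cos(theta): how much the two vectors point the same way.
# cross(a, b): perpendicular to both, with magnitude equal to the area of
# the parallelogram that a and b span.
import numpy as np

a = np.array([1.0, 0.0, 0.0])  # unit vector along x
b = np.array([0.0, 1.0, 0.0])  # unit vector along y

print(np.dot(a, b))                    # 0.0 -- perpendicular, no shared direction
print(np.cross(a, b))                  # [0. 0. 1.] -- perpendicular to both
print(np.linalg.norm(np.cross(a, b)))  # 1.0 -- area of the unit square
```

No search engine, no ads, no hallucinations: for questions with a computable ground truth, the check runs on your own machine.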
Adsense is just for little hobby websites, no actual businesses use it. They all use header bidding, which is (mostly) not controlled by Google.
I almost spit out my drink.
The current subscription situation for LLM stuff actually makes me hopeful.
If you did, that doesn't mean you should. If you can, that doesn't mean you should.
Dead Comment
The only inaccurate thing of that meme page is that you only need to uncheck 5 cookie "partners", when in reality there should be at least a few hundred.
Anyway, is it really not even possible to set up things like NoScript and uBlock Origin on mobile?
It needs to have 15+ to really capture that modern web experience.
Probably useless for mobile though, unless you can punch it into the omnibar with the `javascript:` prefix
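A minimal sketch of the `javascript:` omnibar idea: an overlay-killing bookmarklet. The heuristic, "fixed/sticky position with a high z-index", is an assumption that catches many consent and newsletter popups, not a guarantee:

```javascript
// Heuristic: does a computed style look like a blocking overlay?
// The z-index threshold of 100 is an arbitrary assumption.
function looksLikeOverlay(style) {
  var pinned = style.position === "fixed" || style.position === "sticky";
  var z = parseInt(style.zIndex, 10);
  return pinned && !isNaN(z) && z > 100;
}

// Bookmarklet body: remove matching elements and restore page scrolling.
function killOverlays(doc, win) {
  Array.prototype.slice.call(doc.querySelectorAll("body *")).forEach(function (el) {
    if (looksLikeOverlay(win.getComputedStyle(el))) el.remove();
  });
  doc.body.style.overflow = "auto";
}
```

Minified onto one line behind `javascript:`, this works from a mobile address bar, with the usual caveat that some sites break until you refresh.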
Most of those are like:
What is the price of the Switch 2?
The Switch 2 can be purchased with money. <Insert the Wikipedia article about currencies since the bronze age>
Let's start at the beginning. I was born in 1956 in Chicago. My mother was a cruel drunk and the only thing my father hated more than his work was his family.
It's so sad, cause it drags down good pages. I recently did a lot of research for camping and outdoor gear, and of course I started the journey from Google. But a few sites kept popping up, I really liked their reviews and the quality of the items I got based on that, so I started just going directly to them for comparisons and reviews. This is how it's supposed to work, IMHO.
jesus wept.
https://www.termsandconditions.game/
I never use those. I suspect that in many cases, if there are "legitimate interest" options[1], those will remain opted-in.
----
[1] which I read as "we see your preference not to be stalked online, but fuck you and your silly little preferences we want to anyway"
[1] https://addons.mozilla.org/en-US/firefox/addon/consent-o-mat...
- "Please don't track me."
- "But what if we realllly want to?"
A normal response to that would be an even more resounding FCK NO, but somehow the EU came to the completely opposite conclusion.
I always use the inspect tool to just remove the popup. Interacting with it could be considered consent.
And to find that out you have to hit Page Down about twenty times, scanning as you go, because the content is padded out to increase ad coverage.
Also, I feel like in the long term that's going to kill off the goodwill toward all those smaller sites that are actually good, while the bigger ones keep producing subpar content.
Ad-free? For now. That's just a matter of time.
Yesterday's SEO battles are today's battles to convince LLMs to emit ad tokens. The corpus is already riddled with such content. And LLMs make it even easier to produce more of this spam.
Some annoyed me so much I even disabled JS for them on my phone. I do that more rarely because of how unnecessarily convoluted that setting is in Chromium browsers on Android. You have to navigate 4 levels deep in the settings and enter the domain you want to block into a text field!
For example, I have JS disabled on everything Substack (and it really annoys me when I end up on Substack hosted on a custom domain).
This is the huge one for me. If you search for something in natural language, the results you get on any search engine completely suck - yet ironically the AI overview is generally spot on. Search engines have been stuck in ~2003 for decades. Now the next 'breakthrough' is to use their LLMs to actually link to relevant content instead of using pagerank+ or whatever dysfunctional SEO'd algorithm they're still using.
It was done with the best of intentions, but cookie banners have hurt web browsing more than anything else in the last decade.
Deleted Comment
The article is about Google (and other traditional search engines) snatching away clicks from web site owners. What you describe is AI tools (for lack of a better word[1]) snatching away traffic from the ruling gatekeepers of the web.
I think the latter is a much bigger shift and might well be the end of Google.
By extension it will be the end of SEO as we know it. A lot of discussion currently (especially on HN) is about how to keep the bad crawlers out and in general hide from the oh so bad AI guys. That is not unlike the early days of search engines.
I predict we will soon see a phase where this switches by 180° and everyone will see a fight to be the first one to be accessed to get an opportunity to gaslight the agent into their view of the world. A new three letter acronym will be coined, like AIO or something and we will see a shift from textual content to assets the AI tools can only link to.
Maybe this has already happened to some degree.
[1] Where would I put the boundary? Is Kagi the former or the latter? I'd say if a tool does a non-predetermined number of independent activities (like searches) on its own and only stops if some criteria are fulfilled it is clearly in the latter category.
In this model, only monetizable content will be generated though.
As much as we abhor what advertising has done to the web, at least it’s independent of content: pair quality content with ads, make money.
In the brave new AI search world, only content which itself is directly monetizable will be created. E.g. astroturf ads
The term you're looking for is GEO (Generative Engine Optimization), though your "AIO" is also used. It's the new frontier.
And you've nailed the 180° turn: the game is no longer about blocking crawlers but about a race to become their primary source. The goal is to be the one to "gaslight the agent" into adopting your view of the world. This is achieved not through old SEO tricks, but by creating highly structured, authoritative content that is easy for an LLM to cite.
Your point about shifting to "assets the AI tools can only link to" is the other key piece. As AI summarization becomes the norm, the value is in creating things that can't be summarized away: proprietary data, interactive tools, and unique video content. The goal is to become the necessary destination that the AI must point to.
The end of SEO as we know it is here. The fight for visibility has just moved up a layer of abstraction.
I can definitely see LLM companies offering content creators a bump in training priority for a fee. It will be like ad sales, but you're paying for the LLM to consider your content at a higher priority than your competition's.
I guess if my experience was as much degraded as yours I wouldn't bother with the web anymore, so yay for AI summarizers, at least for the time being. And don't get me wrong, a summarizer is a workaround, not a solution.
Sometimes it breaks the site so that you can't scroll or something, but that's quite rare. And most of the time it's solved by a refresh. Very infrequently you need to whitelist the site and then deal with the nag screen manually. A bit annoying, but way better than rawdogging it.
Works on desktop & mobile.
Clicking on a video to mute it also needs to navigate to a sponsor’s page and break the back button. And then the page reloads which doubles the page view count. Genius web dev decision. I bet they said “there’s literally no downsides to doing this!”
Also, the ads need to autoplay on full volume, often bypassing my system volume somehow so they can play even though the rest of the audio is on mute and none of the mute functionality works. Surely the user simply forgot they had mute on so we should just go ahead and fix that.
They also need to play on 4K ultra HD to use my entire monthly cell plan if I don’t stop it in the first 3 seconds, which I can’t do because the video has to fully load before I’m able to interact with it to click stop. Or clicking stop pauses it and then automatically restarts playing the video.
These webdev chrome devs need to stop adding new random features and start fixing the basic functionality. I don’t want fading rotating banners that save 3 lines of CSS. I want the “DO NOT AUTOPLAY. EVER.” Button to actually work.
I might come back later though.
https://marginalia-search.com/site/www.fontstruct.com?view=t...
https://marginalia-search.com/search?query=special%3Apopover...
My web experience has been reduced to a handful of bookmarks, X, and chatgpt or grok. Occasionally I’ll go looking for government sites to validate something I read on X. Everything else is noise
This is naturally not addressed in the US "AI" Action Plan, same as copyright theft.
For now. There's nothing stopping them from injecting ads into their summary, and chances are they eventually will.
for now! and we should enjoy it while it lasts. Ad-driven AIs are coming, it is inevitable.
AI stole all the content from those websites, starving them of ad revenue.
The Google overview is made by the same company that put those ads on those websites in the first place.
What is coming next is that there will be ads in the overview and you will have no choice but to read it because all its cited links will be rotten.
Those data centers don't pay for themselves, you know.
1. Open Safari
2. Type something so that it searches Google
3. A web results page appears
4. Immediately a popup appears with two buttons:
- They have the same size
- One is highlighted in blue and it says CONTINUE
- The other is faint and reads "Stay in browser" (but in my native language the distinction is even less clear)
5. Clicking CONTINUE means "CONTINUE in the app", so it takes me to the Google App (or, actually, to the app store, because I don't have this app), but this does not end there!
6. If I go back to the browser to try to fucking use google on my fucking browser, as I fucking wanted to, I realize that doing "Back" now constantly moves me to the app (or app store). So, in effect, I can never get to the search results once I have clicked continue. The back button has been hijacked (long pressing does not help). My only option is to NEVER click continue
7. Bonus: All of this happens regardless of my iPhone having the google app installed or not
So: Big button that says "CONTINUE" does not "CONTINUE" this action (it, of course, "CONTINUES" outside).
I just want to FUCKING BROWSE THE WEB. If I use the google app, then clicking a link presumably either keeps me in its specific view of the web (outside of my browser), or it takes me out of the app. This is not the experience I want. I have a BROWSER for a reason (e.g. shared groups/tabs...)
Oh! And since this happens even if I don't have the app, it takes me to the app store. If I install the app via the app store, it then DOES NOT have any mechanism to actually "Continue". It's a fresh install. And, of course, if I go back to the browser and hit "back", I can't.
So for users who DO NOT HAVE THE APP, this will NEVER LET THEM CONTINUE. It will PREVENT THEM FROM USING GOOGLE. And it will force them to do their query AGAIN.
Did the people who work on this feature simply give up? What. The. Fuck?
This behavior seems to happen on-and-off, as if google is gaslighting me. Sometimes it happens every time I open Safari. Some other times it goes for days without appearing. Sometimes in anonymous tabs, sometimes not. Logged in or not, I've seen both scenarios.
I can't be sure, but I genuinely believe that the order of the buttons has been swapped, messing with my muscle memory.
Basically it's this image: https://www.reddit.com/r/iphone/comments/1m76elp/how_do_i_st...
Except a still image cannot describe the excruciating process of dealing with it — especially realizing "oh, wait, I clicked the wrong button, oh wait, no no no, get out of the app store, oh oh oh what did I type again? Damn I lost it all!..."
[1]I would quit before implementing this feature. It disgusts me, and we're talking about google, not some run-of-the-mill company whom you have to work for to barely survive. This is absolutely shameful.
A year ago my ad-supported website had 100,000 monthly active users. Now, like the article says, traffic is down 40% thanks to Google AI Overview zero clicks. There's loss of revenue, yes, but apart from that, I'm wondering how people can find my work, if I produce more? They seldom click through on the "source" attributes, if any.
I wonder, am I standing at the gates of hell in a line that includes Tower Records and Blockbuster? Arguably because I'm among those that built this dystopia with ever-so-helpful technical content.
Maybe, but there’s a big difference - Netflix doesn’t rely on Blockbuster, and Spotify doesn’t need Tower Records. Google AI results do need your articles, and it returns the content of them to your readers without sending you the traffic. And Google is just trying to fend off ChatGPT and Meta and others, who absolutely will, if allowed, try to use their AI to become the new search gateways and supplant Google entirely.
This race will continue as long as Google & OpenAI & everyone else gets to train on your articles without paying anything for them. Hopefully in the future, AI training will either be fully curated and trained on material that’s legal to use, or it will license and pay for the material they want that’s not otherwise free. TBH I’m surprised the copyright backlash hasn’t been much, much bigger. Ideally the lost traffic you’re seeing is back-filled with licensing income.
I guess you can rest a little easier since we got to where we are now not primarily because of technical means but mostly by allowing mass copyright violation. And maybe it helps a little to know that most content-producing jobs in the world are in the same boat you are, including the programmers in your target audience. That’s cold comfort, but OTOH the problem you (we) face is far more likely to be addressed and fixed than if it was only a few people affected.
Even when you have them dead to rights (like with the Whisper hallucinations) the legal argument is hard to make. Besides, the defendants have unfathomable resources.
As sad as it is, I think we're looking at the end of the open internet as we've known it. This is a massive tragedy-of-the-commons situation, and there seems to be roughly zero political will to enact the regulations needed to keep things fair and sustainable. The costs of this trend are massive, but they are spread out across many millions of disparate producers and consumers, while the gains are extremely concentrated in the hands of the few; and those few have good lobbyists.
The beginning of the end was including Wikipedia entries directly in the search results, although arguably even some of the image results are high-quality enough to warrant skipping a visit to the actual website (if you were lucky enough to get the image at the target site in the first place). So maybe it goes back further than that.
It does speak to one of the core problems with AI: the one-time productivity boost from using all the historical data created by humans won't be as useful going forward, since individual contributors will no longer build and provide that information unless the incentive models change.
Every year they put the threshold higher and it results in more and more people getting burned. Of course the big, established brands are protected.
So they don't want the average joe's opinion. And they don't want to funnel money to you, now that you have fulfilled your purpose.
It happens with all VC-backed products, since the drive for returns on invested capital is so high.
Put another way -- early-stage products that everyone uses and loves should not (in most, though not all, cases) be assumed to be the end product.
Yes, maybe only a small number of people end up contributing, but if their input is truly novel and “true”, then what’s the downside?
If you still have a connection to your readers (e.g. email) you can still reach them. If they've formed a community, even better. If not, it's a good time to work on that.
Google doesn't really have that. I have zero sense of community with Google. And that's why they'll die if something doesn't change.
Simple content that can be conveyed in a few succinct lines of text (like how to uninstall Homebrew) is actually one of the great use cases for AI summaries.
I’m sorry that it’s losing you revenue, but I’d much rather get a quick answer from AI than have to roll the dice on an ad-supported search result where I have to parse the layout, dodge the ads, and extract the relevant info from the filler content and verbiage
Utopian fantasy: interact with the ai - novel findings are registered as such and "saved" and made available to others.
Creative ideas are registered as such; if possible, they're tested in "side quests", i.e. the AI asks: do you have 5 min to try this? You unblock yourself if it works & see in the future how many others profited as well (3k people read this finding).
It's all a logistics question
Deleted Comment
If I create content like recipes, journalism etc, previously I had exclusive rights to my created content and could monetise it however I wanted. This has mostly led to what we have today, some high quality content, lots of low quality content, mostly monetised through user hostile ads.
Previously, if I wanted to take a recipe from "strawberry-recipes.cool" and published it on my own website with a better user experience, that wouldn't have been allowed because of copyright rules. I still can't do that, but Google can if it's done through the mechanism of AI summaries.
I think the worst case scenario is that people stop publishing content on the web altogether. The most likely one is that search/summary engines eat up money that previously came from content creators. The best one is that we find some alternative, third way, for creators to monetise content while maintaining discoverability.
I'm not sure what will happen, and I'm not denying the usefulness of AI summaries, but it feels easy to miss that, at their core, they're a fundamental reworking of the current economics of the internet.
This would be lovely.
> I think the worst case scenario is that people stop publishing content on the web altogether. The most likely one is that search/summary engines eat up money that previously came from content creators.
More than likely, people return to publishing content because they love the subject matter and not because it is an angle to “create content” or “gain followers” or show ads. No more “the top 25 hats in July 2025” AI slopfest SEO articles when I look for a hat, but a thoughtful series of reviews with no ads or affiliate links, just because someone is passionate about hats. The horror! The horror!
Why would you do that if you thought it was going to be hoovered up by some giant corporation and spat out again for $20 a month with no attribution.
I agree the current model sucks, but I think it being replaced is only good if it's replaced with something better.
> More than likely, people return to publishing content because they love the subject matter
I'd love the idea of people doing things because they're passionate, but I feel a little unsure about people doing things because they're passionate, generating money from those things, and all that money going to AI summariser companies. I think there are some pretty serious limits too; journalists risk their safety a lot of the time, and I can't see a world where that happens purely out of "passion" without any remuneration. Aside from anything else, some acts of journalism, like overseas reporting, aren't compatible with working a separate "for-pay" job.
Someone who publishes content because they love the subject matter would only reach enough of an audience to have an impact if they work on it, a lot, and most people wouldn't do that without some expectation of return on investment, so they'd follow the influencer / commercial publication playbook and end up in the same place as the established players in the space are already.
If you're satisfied with being on the 50th page of Google results, then that's fine. Nobody will find you, though.
The web is on a trajectory where a local DIY zine will reach as many readers as an open website. It might even be cheaper than paying for a domain+hosting once that industry contracts and hosting plans aren't robust enough to keep up with requests from vibe-coded scrapers.
I disagree with that. There are still people out there doing that out of passion, that hasn't changed (it's just harder to find). Bad actors who are only out there for the money will continue trying to get the money. Blogs might not be relevant anymore, but social media influencing is still going to be a thing. SEO will continue to exist, but now it's targeted to influence AIs instead of the position in Google search results. AIs will need to become (more) profitable, which means they will include advertising at some point. Instead of companies paying Google to place their products in the search or influencers through affiliate links, they will just pay AI companies to place their products in AI results or influencers to create fake reviews trying to influence the AI bots. A SEO slop article is at least easy to detect, recommendations from AIs are much harder to verify.
Also, it's going to hit journalism. Not everyone can just blog because they're passionate about something. Any content produced by professionals is either going to be paywalled even more, or they will need to find different sources of income, threatening journalistic integrity. And that gives bad actors with money even more ways to publish news in their interest for free, gaining more influence over the public debate.
Recipes are not protected by copyright law. That's _why_ recipe bloggers have resorted to editorialising recipes, because the editorial content is copyrightable.
Quite clearly heading in that direction, but with a twist: the only content left will be advertising or propaganda, if there's no money in authenticity or correctness.
And this is the reason why Google took its sweet time to counter OpenAI's GPT-3. They _had_ to come up with this, which admittedly disrupts the publishers' business model, but at least if Google is successful they will keep their moat as the first step in any sales funnel.
This is not true, you absolutely could have republished a recipe with your own wording and user experience.
I'm guilty of not clicking when I'm satisfied with the AI answer. I know it can be wrong. I've seen it be wrong multiple times. But it's right at the top and tells me what I suspected when I did the search. The way they position the AI overview is right in your face.
I would prefer the "AI overview" to be replaced with something that helps me better search rather than giving me the answer directly.
Which also introduces the insidious possibility that AI summaries will be designed to confirm biases. People already use AI chat logs to prove stuff, which is insane, but it works on some folks.
Hell will freeze over first
1. The anchor icon.
2. Then one of the sites that appear on the right (on desktop).
And yet, "the algorithm" has always been their first defense whenever they got a complaint or lawsuit about search results; I suspect that when (not if) they get sued over this, they will do the same. Treating their algorithms and systems as a mysterious, somewhat magic black box.
And there's no AI garbage sitting in the top of the engine.
Searching for “who is Roger rabbit” gives me Wikipedia, IMDb and film site as results.
Searching for “who is Roger rabbit?” gives me a “quick answer” LLM-generated response: “Roger Rabbit is a fictional animated anthropomorphic rabbit who first appeared in Gary K. Wolf's 1981 novel…” followed by a different set of results. It seems the results are influenced by the sources/references the LLM generated.
In your case, I think it is just the question mark at the end that somehow has an impact on the results you see.
However, it's pretty bad for local results and shopping. I find that anytime I need to know a local store's hours or find the cheapest place to purchase an item I need to pivot back to Google. Other than that it's become my default for most things.
I’ve also tried various engines over the years. Kagi was the first one that didn’t have me needing to go back to Google. I regularly find things that people using Google seem to not find. The Assistant has solved enough of my AI needs that I don’t bother subscribing to any dedicated AI company. I don’t miss Google search at all.
I do still use Google Maps, as its business data still seems the best out there, and second place isn’t even close. Kagi is working on their own maps, but that will be a long road. I’m still waiting for Apple to really go all-in, instead of leaning on Yelp.
Apple really needs to update Safari to let people choose any search engine, instead of restricting the choice to a short list of blessed ones.
6 months ago, "what temp is pork safe at?" was a few clicks away, behind long SEO-optimised blog post answers, usually all in F not C ... despite Google knowing my location ... I used it at the time as an example of 'how hard can this be?'
First sentence of Google's AI response right now: "Pork is safe to eat when cooked to an internal temperature of 145°F (63°C)"
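For reference, the unit conversion behind that dual F/C answer is simple arithmetic:

```python
def f_to_c(f: float) -> float:
    # Fahrenheit to Celsius: subtract 32, then scale by 5/9.
    return (f - 32) * 5 / 9

print(round(f_to_c(145)))  # rounds to 63, matching the quoted figure
```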
If you made a bet with your friend and are using the AI overview to settle it, fine. But please please click on an actual result from a trusted source if you’re deciding what temperature to cook meat to
But SEO slop machines have made it so hard to find the good websites without putting in more legwork than makes sense a lot of the time. Funnily enough, this makes AI look like a good option to cut through all the noise despite its hallucinations. That's obviously not acceptable when it comes to food safety concerns though.
So, I'd agree -- safety info from an LLM is bad. But generally, the /flavor/ (heh) of information that such data comprises is REALLY good to get from LLMs (as opposed to nuanced opinions or subjective feedback).
After all, AI can theoretically ask follow-up questions that are relevant, can explain subtleties peculiar to a specific situation or request, can rephrase things in ways that are clearer for the end user.
Btw, "What temperature should a food be cooked to" is a classic example of something where lots of people and lots of sources repeat incorrect information, which is often ignored by people who actually cook. Famously, the temp that is often "recommended" is only the temp at which bacteria/whatever is killed instantly - but is often too hot to make the food taste good. What is normally recommended is to cook to a lower temperature but keep the food at that temperature for a bit longer, which has the same effect safety-wise but is much better.
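The "lower temperature, held longer" trade-off described above is usually modelled log-linearly: every drop of one z-value in temperature multiplies the required hold time by ten. A minimal sketch with placeholder constants (the reference point and z-value are illustrative, not food-safety guidance):

```python
def equivalent_hold_time(temp_c: float,
                         ref_temp_c: float = 70.0,
                         ref_time_s: float = 1.0,
                         z_value_c: float = 7.0) -> float:
    """Hold time at temp_c giving the same log-reduction as ref_time_s
    at ref_temp_c, under the standard log-linear pasteurization model.
    All constants here are illustrative placeholders.
    """
    return ref_time_s * 10 ** ((ref_temp_c - temp_c) / z_value_c)

print(equivalent_hold_time(70.0))  # 1.0 second at the reference temperature
print(equivalent_hold_time(63.0))  # 10.0 seconds: 7 degrees cooler, 10x longer
```

This is why the "instant-kill" temperature in the headline advice and the lower sous-vide-style holds can both be safe: they sit on the same lethality curve.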
https://en.m.wikipedia.org/wiki/Mett
When I searched for the safe temperature for pork (in German), I found this as the first link (Kagi search engine)
> Ideally, pork should taste pink, with a core temperature between 58 and 59 degrees Celsius. You can determine the exact temperature using a meat thermometer. Is that not a health concern? Not anymore, as nutrition expert Dagmar von Cramm confirms: "Trichinae inspection in Germany is so strict — even for wild boars — that there is no longer any danger."
https://www.stern.de/genuss/essen/warum-sie-schweinefleisch-...
Stern is a major magazine in Germany.
1. https://www.foodsafety.asn.au/australians-clueless-about-saf...
2. https://www.foodsafety.gov/food-safety-charts/safe-minimum-i...
3. https://pork.org/pork-cooking-temperature/
All three were highly informative, well cited sources from reputable websites.
I then chatted that back to it, and it was like, oh ya, I made a mistake, you're right, sorry.
Anyways, luckily I did not get sick.
Moral of the story: don't get mentally lazy and let AI save you the small amount of thinking that simple answers take.
Why are people downvoting this? I’ve literally never seen anyone use a thermometer to cook a burger or steak or pork chop. A whole roasted turkey, sure.
Why would you purchase meat that you suspect is diseased? Even if you cook it well-done, all the (now dead) bacteria and their byproducts are still inside. I don't understand why people do this to themselves? If I have any suspicion about some meat, I'll throw it away. I'm not going to cook it.
People have been eating pork for over 40,000 years. There’s speculation about whether pork or beef was first a part of the human diet.
(5000 words later)
The USDA recommends cooking pork to at least 145 degrees.
First result under the overview is the National Pork Board, shows the answer above the fold, and includes visual references: https://pork.org/pork-cooking-temperature/
Most of the time if there isn't a straightforward primary source in the top results, Google's AI overview won't get it right either.
Given the enormous scale and latency constraints they're dealing with, they're not using SOTA models, and they're probably not feeding the model 5000 words worth of context from every result on the page.
Maybe they could just show the links that match your query and skip the overview. Sounds like a billion-dollar startup idea, wonder why nobody’s done it.
Trust it if you want I guess. Be cautious though.
First result: https://www.porkcdn.com/sites/porkbeinspired/library/2014/06...
Second result: https://pork.org/pork-cooking-temperature/
AI: 63C
First result: Five year old reddit thread (F only discussion, USDA mentioned).
Second result: ThermoWorks blog (with 63C).
Third result: FoodSafety.gov (with 63C)
Fourth result: USDA (with 63C)
Seems reasonable enough to scan 3-4 results to get some government source.
I know you can’t necessarily trust anything online, but when the first hit is from the National Pork Board, I’m confident the answer is good.
No it wasn't, most of the first page results have the temperature right there in the summary, many of them with both F and C, and unlike the AI response, there is much lower chance of hallucinated results.
So you've gained nothing
PS: Trying the same search with -ai gets you the full table of temperatures, unlike the AI summary, where you have to click to get more details, so the new AI summary is strictly worse
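Besides the -ai trick, one widely reported way to get plain web results with no AI Overview is Google's "Web" view parameter, udm=14 — undocumented, so it may change or disappear. A minimal sketch of building such a URL:

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    # udm=14 selects Google's "Web" results view, which (as widely
    # reported, not officially documented) skips the AI Overview.
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search_url("what temp is pork safe at"))
```

Some browsers let you register this URL pattern (with %s in place of the query) as a custom search engine, making the AI-free view the default.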
Deleted Comment
Deleted Comment
The problem is there is no money and fame in using it that way, or at least so people think in the current moment. But we could return to enforcing some sort of clear, pro-reader writing and bury the 2010s-2020s SEO garbage on page 30.
Not to mention that the LLMs randomly lie to you, with fewer secondary hints at trustworthiness (author, website, other articles, design, etc.) than you get in any other medium. And there's the sustainability side of incentivizing people to publish anything. I really see the devil of convenience as the only argument for the LLM summaries here.
We could.
But it will absolutely not happen unless and until it can be more profitable than Google's current model.
What's your plan?
> Not to mention that the LLMs randomly lie to you, with fewer secondary hints at trustworthiness (author, website, other articles, design, etc.) than you get in any other medium. And there's the sustainability side of incentivizing people to publish anything. I really see the devil of convenience as the only argument for the LLM summaries here.
Well, yes. That's the problem. Why rely on the same random liars as taste-makers?
> The next full moon in New York will be on August 9th, 2025, at 3:55 a.m.
"full moon time LA"
> The next full moon in Los Angeles will be on August 9, 2025, at 3:55 AM PDT.
I mean, it certainly gives an immediate answer...
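The giveaway is that a full moon happens at a single instant worldwide, so New York and Los Angeles cannot both see it at 3:55 a.m. local time. A quick sketch (assuming the 3:55 a.m. Eastern figure is the correct one, i.e. 07:55 UTC):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Assumed instant: 3:55 a.m. EDT on 2025-08-09 is 07:55 UTC.
full_moon_utc = datetime(2025, 8, 9, 7, 55, tzinfo=timezone.utc)

for tz in ("America/New_York", "America/Los_Angeles"):
    local = full_moon_utc.astimezone(ZoneInfo(tz))
    print(tz, local.strftime("%H:%M"))  # 03:55 in New York, 00:55 in LA
```

The LA answer should have been 12:55 a.m. PDT; the overview simply copied the Eastern clock time and relabeled the timezone.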
https://www.bbc.co.uk/news/articles/cd11gzejgz4o
"When our grandmothers and grandfathers were growing up, there was a real threat to their health that we don’t face anymore. No, I’m not talking about the lack of antibiotics, nor the scarcity of nutritious food. It was trichinosis, a parasitic disease that used to be caught from undercooked pork.
The legitimate worry of trichinosis led their mothers to cook their pork until it was very well done. They learned to cook it that way and passed that cooking knowledge down to their offspring, and so on down to us. The result? We’ve all eaten a lot of too-dry, overcooked pork.
But hark! The danger is, for the most part, past, and we can all enjoy our pork as the succulent meat it was always intended to be. With proper temperature control, we can have better pork than our ancestors ever dreamed of. Here, we’ll look at a more nuanced way of thinking about pork temperatures than you’ve likely encountered before."
Sorry, what temperature was it again?
Luckily there's the National Pork Board which has bought its way to the top, just below the AI overview. So this time around I won't die from undercooked pork at least.
Deleted Comment