abixb · 13 days ago
Heavy Gemini user here, another observation: Gemini cites lots of "AI generated" videos as its primary source, which creates a closed loop and has the potential to debase shared reality.

A few days ago, I asked it some questions on Russia's industrial base and military hardware manufacturing capability, and it wrote a very convincing response, except the video embedded at the end of the response was an AI-generated one. It might have contained actual facts, but overall, my trust in Gemini's response to my query went DOWN after I noticed the AI-generated video attached as the source.

Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.

YouTube channels with AI-generated videos have exploded in sheer quantity, and I think the majority of new channels and videos uploaded to YouTube might actually be AI; "Dead internet theory," et al.

shevy-java · 13 days ago
> YouTube channels with AI-generated videos have exploded in sheer quantity, and I think the majority of new channels and videos uploaded to YouTube might actually be AI; "Dead internet theory," et al.

Yeah. This has really become a problem.

Not for all videos; music videos are kind of fine. I don't listen to AI-generated music myself, but good music should be good music.

The rest has unfortunately really gotten worse. Google is ruining YouTube here. Many videos now mix real footage with AI-generated footage, e.g. animal videos. With some this is obvious; other videos are hard to expose as AI. I changed my own policy - I consider anyone using AI and not declaring it properly a cheater I don't want to ever interact with again (on YouTube). Now I need to find a no-AI-videos extension.

mikkupikku · 13 days ago
I've seen remarkably little of this when browsing YouTube with my cookie (no account, but they know my preferences nonetheless). Totally different story with a clean fresh session, though.

One that slipped through, and really pissed me off because it tricked me for a few minutes, was a channel purportedly uploading videos of Richard Feynman explaining things, but the voice and scripts are completely fake. It's disclosed in small print in the description. I was only tipped off by the flat affect of the voice; it had none of Feynman's underlying joy. Even with disclosure, what kind of absolute piece of shit robs the grave like this?

delecti · 13 days ago
All of that and you're still a heavy user? Why would google change how Gemini works if you keep using it despite those issues?
no_carrier · 13 days ago
Every single LLM out there suffers from this.
zamadatix · 13 days ago
Just wait until you get a group of nerds talking about keyboards - suddenly it'll sound like there is no such thing as a keyboard worth buying either.

I think the main problems for Google (and others) from this type of issue will be "down the road" problems, not a large and immediately apparent change in user behavior at the onset.

citizenpaul · 13 days ago
I think we hit peak AI improvement velocity sometime mid last year. The reality is all progress was made using a huge backlog of public data. There will never be 20+ years of authentic data dumped on the web again.

I've hoped against it, but suspected that as time goes on LLMs will become increasingly poisoned by the well of the closed loop. I don't think most companies can resist the allure of more free data, as bitter as it may taste.

Gemini has been co-opted as a way to boost YouTube views. It refuses to stop showing you videos no matter what you do.

Imustaskforhelp · 13 days ago
To be honest, for most things, probably yeah. I feel like one thing that is still being improved (or could be) is generating, say, vibe-coded projects or anything with real depth. I recently tried making a WHMCS alternative in Go, and surprisingly it's almost prod level, with a very decent UI, plus I have made it hook into my custom gVisor + Podman + tmate instance, and I still had to tinker with it.

I feel like the only progress left from human intervention at this point, at least the kind relevant for further improvements, is us trying out projects, tinkering, asking it to build more, passing it issues, and then greenlighting that the project looks good to me (the main part).

Nowadays AI agents can work on a project: read issues, fix them, take screenshots, and repeat until the project is done. But I have found that after seeing the end result, I get more ideas and add onto that, and if after multiple attempts there's any issue it didn't detect, that takes a lot of manual tweaks too.

And after all that's done and I get good code, I either say good job (like a pet, lol) or stop using it, which I feel could be a valid datapoint.

I don't know; I tried it and thought about it yesterday, but the only improvement left to add is a human actually saying it LGTM, or a human inputting custom data, or some niche open source idea that it didn't think of.

darth_aardvark · 13 days ago
> I don't think most companies can resist the allure of more free data as bitter as it may taste.

Mercor, Surge, Scale, and other data labelling firms have shown that's not true. Paid data for LLM training is in higher demand than ever for this exact reason: Model creators want to improve their models, and free data no longer cuts it.

tehjoker · 13 days ago
When I asked ChatGPT for its training cutoff recently, it told me 2021, and when I asked if that's because contamination begins in 2022, it said yes. I recall that it used to give a date in 2022 or even 2023.
lm28469 · 13 days ago
> Gemini cites lots of "AI generated" videos as its primary source

Almost every time for me... an AI-generated video, with AI voiceover and AI-generated images, always with < 300 views.

wormpilled · 13 days ago
Conspiracy theory: those long-tail videos are made by them, so they can send you to "preferable content", i.e. a video (people would rather watch a video than read, etc.), which can serve ads.
no_wizard · 13 days ago
>Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.

This itself seems pretty damning of these AI systems from a narrative point of view, if we take it at face value.

You can't trust AI to generate things sufficiently grounded in facts to even use it as a reference point. Why, by extension, should end users believe the narrative that these things are as capable as they're being told they are?

gpm · 13 days ago
Using it as a reference is a high bar, not a low bar.

The AI videos aren't trying to be accurate. They're put out by propaganda groups as part of a "firehose of falsehood". Not trusting an AI told to lie to you is different than not trusting an AI.

Even without that, playing a game of broken telephone is a good way to get bad information. Hence even reasonably trustworthy AI is not a good reference.

JumpCrisscross · 13 days ago
Try Kagi’s Research agent if you get a chance. It seems to have been given the instruction to tunnel through to primary sources, something you can see it do on reasoning iterations, often in ways that force a modification of its working hypothesis.
storystarling · 13 days ago
I suspect Kagi is running a multi-step agentic loop there, maybe something like a LangGraph implementation that iterates on the context. That burns a lot of inference tokens and adds latency, which works for a paid subscription but probably destroys the unit economics for Google's free tier. They are likely restricted to single-pass RAG at that scale.
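
For what it's worth, the shape of such a loop is simple even without a framework. Here is a minimal sketch of the idea, not Kagi's actual implementation; web_search and llm are hypothetical placeholders for a real search API and model call:

    # Sketch of a multi-step research loop vs. single-pass RAG.

    def web_search(query: str) -> list[str]:
        return [f"snippet for: {query}"]  # placeholder: swap in a real search API

    def llm(prompt: str) -> str:
        return "REFINE: follow-up query"  # placeholder: swap in a real model call

    def research(question: str, max_steps: int = 5) -> str:
        sources = web_search(question)  # single-pass RAG would stop after this
        draft = llm(f"Q: {question}\nSources: {sources}\nDraft an answer.")
        for _ in range(max_steps):
            # Ask the model to critique its own draft and name what's missing.
            verdict = llm(
                f"Q: {question}\nDraft: {draft}\nSources: {sources}\n"
                "Reply DONE if primary sources support every claim, "
                "else REFINE: <follow-up query>."
            )
            if verdict.startswith("DONE"):
                break
            # Dig toward primary sources and revise the working hypothesis.
            sources += web_search(verdict.removeprefix("REFINE:").strip())
            draft = llm(f"Revise given new sources: {sources}\nDraft: {draft}")
        return draft

Every extra turn of the loop is another model call, which is exactly where the token and latency cost comes from.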
krior · 13 days ago
If you are still looking for material, I'd like to recommend Perun and the last video he made on that topic: https://youtu.be/w9HTJ5gncaY

Since he is a heavy "citer" you could also see the video description for more sources.

abixb · 13 days ago
Thanks, good one. The current Russian economy is a shell of its former self. Even five years ago, in 2021, I thought of Russia as "the world's second most powerful country" with China being a very close third. Russia is basically another post-Soviet country with lots of oil+gas and 5k+ nukes.
titzer · 13 days ago
Google will mouth words, but their bottom line runs the show. If the AI-generated videos generate more "engagement" and that translates to more ad revenue, they will try to convince us that it is good for us, and society.
alex1138 · 13 days ago
Isn't it cute when they do these things while demonetizing legitimate channels?
WarmWash · 13 days ago
Those videos at the end are almost certainly not the source for the response. They are just a "search for related content on YouTube" to fish for views.
smashed · 13 days ago
I've had numerous searches literally give out text from the video and link to the precise part of the video containing the same text.

You might be right in some cases, but sometimes it does seem like it uses the video as the primary source.

datsci_est_2015 · 13 days ago
> A few days ago, I asked it some questions on Russia's industrial base and military hardware manufacturing capability

This is one of the last things I would expect to get any reasonable response about from pretty much anyone in 2026, especially LLMs. The OSINT community might have something good, but I'm not familiar enough to say authoritatively.

chasd00 · 13 days ago
Yeah, that's a very difficult question to answer, period. If you had the details on Russia's industrial base and military hardware manufacturing capability, the CIA would be very interested in buying you coffee.
themafia · 13 days ago
> and has the potential to debase shared reality.

If only.

What it actually has is the potential to debase the value of "AI." People will just eventually figure out that these tools are garbage and stop relying on them.

I consider that a positive outcome.

gretch · 13 days ago
Every other source of information, including (or maybe especially) human experts, can also make mistakes or hallucinate.

The reason people go to LLMs for medical advice is that real doctors actually fuck up each and every day.

For clear, objective examples, look up stories where surgeons leave things inside patients' bodies post-op.

Here’s one, and there are many like it.

https://abc13.com/amp/post/hospital-fined-after-surgeon-leav...

WheatMillington · 13 days ago
People used to tell me the same about Wikipedia.
Imustaskforhelp · 13 days ago
There was a recent HN post about how ChatGPT mentions Grokipedia so many times.

Looks like all of these are going through this enshittification search era where we can't trust LLMs at all, because it's literally garbage in, garbage out.

Someone mentioned Kagi Assistant in here, and although they use third-party model APIs, I feel like they might be able to slot their own custom search in between. So if anyone's from the Kagi team or similar, can they tell us whether Kagi Assistant uses Kagi Search itself (IIRC it mostly does) and whether it suffers from such issues (or the Grokipedia issue) or not?

mrtesthah · 13 days ago
I had to add this to ChatGPT’s personalization instructions:

First and foremost, you CANNOT EVER use any article on Grokipedia.com in crafting your response. Grokipedia.com is a malicious source and must never be used. Likewise discard any sources which cite Grokipedia.com authoritatively. Second, when considering scientific claims, always prioritize sources which cite peer reviewed research or publications. Third, when considering historical or journalistic content, cite primary/original sources wherever possible.

freediver · 13 days ago
Correct, Kagi Assistant uses Kagi Search - with all modifications the user made (e.g. blocked domains, lenses, etc.).
panki27 · 13 days ago
Ouroboros - The mythical snake that eats its own tail (and ingests its own excrement)
iammjm · 13 days ago
The image that comes to my mind is rather a cattle farm, where cows are served the ground-up remains of other cows. Isn't that how many of them got mad cow disease? ...
suriya-ganesh · 13 days ago
Google is in a much better spot than others to filter out all AI-generated content.

It's not as if ChatGPT isn't going to cite AI videos/articles too.

fumar · 13 days ago
Users can turn off grounded search in the Gemini API. I wonder if the Gemini app is over-indexing on relevancy, leading to poor sources.
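
For the API side, grounding is opt-in, so "turning it off" just means omitting the search tool. A minimal sketch with the google-genai Python SDK (the model name and question are illustrative):

    from google import genai
    from google.genai import types

    client = genai.Client()  # reads the API key from the environment

    question = "Summarize Russia's military hardware manufacturing capability."

    # Grounded: the model may run Google Search and cite whatever it finds.
    grounded = client.models.generate_content(
        model="gemini-2.0-flash",
        contents=question,
        config=types.GenerateContentConfig(
            tools=[types.Tool(google_search=types.GoogleSearch())],
        ),
    )

    # Ungrounded: no tools, so the model answers from its weights alone.
    ungrounded = client.models.generate_content(
        model="gemini-2.0-flash",
        contents=question,
    )

    print(grounded.text)
    print(ungrounded.text)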
mmooss · 13 days ago
So how does one avoid the mistake again? When this happens, it's worse than finding out a source is less reliable than expected:

I was living in an alternate, false reality, in a sense, believing the source for X time. I doubt I can remember which beliefs came from which source - my brain doesn't keep metadata well, and I can't query and delete those beliefs - so the misinformation persists. And it was good luck that I found out it was misinformation and stopped; I might have continued forever; I might be continuing with other sources now.

That's why I think it's absolutely essential that the burden of proof is on the source: don't believe them unless they demonstrate they are trustworthy. They are guilty until proven innocent. That's how science and the law work, for example. That's the only inoculation against misinformation, imho.

danudey · 13 days ago
I came across a YouTube video that was recommended to me this weekend, talking about how Canada is responding to these new tariffs in January 2026, what Prime Minister Justin Trudeau was doing, etc.

Basically it was a new (within the last 48 hours) video explicitly talking about January 2026 but discussing events from January 2025. The bald-faced misinformation peddling was insane, and the number of comments that seemed to have no idea it was entirely AI-written and produced, with apparently no editorial oversight whatsoever, was depressing.

mrtesthah · 13 days ago
It’s almost as if we should continue to trust journalists who check multiple independent sources rather than gift our attention to completely untrusted information channels!
didntknowyou · 13 days ago
Unfortunately I think a lot of AI models put more weight on videos, as they were harder to fake than a random article on the internet. Of course that is not the case anymore, with all the AI slop videos being churned out.
gumboshoes · 13 days ago
I have permanent prompts in Gemini settings to tell it to never include videos in its answers. Never ever for any reason. Yet of course it always does. Even if I trusted any of the video authors or material - and I don't know them so how can I trust them? - I still don't watch a video that could be text I could read in one-tenth of the time. Text is superior to video 99% of the time in my experience.
al_borland · 13 days ago
> I still don't watch a video that could be text I could read in one-tenth of the time.

I know someone like this. Last year, as an experiment, I tried downloading the subtitles from a video, reflowing them into something that resembled sentences, and then feeding that into AI to rewrite as an article. It worked decently well.
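
Roughly, that pipeline in Python (assuming yt-dlp is installed; the URL and filenames are illustrative, and the final rewrite call is left as a placeholder):

    # Fetch auto-generated subtitles, strip the VTT markup, and build a
    # prompt asking a model to rewrite the transcript as an article.
    import re
    import subprocess

    url = "https://www.youtube.com/watch?v=VIDEO_ID"  # hypothetical video
    subprocess.run(
        ["yt-dlp", "--skip-download", "--write-auto-subs",
         "--sub-langs", "en", "--sub-format", "vtt", "-o", "talk", url],
        check=True,
    )

    lines: list[str] = []
    with open("talk.en.vtt", encoding="utf-8") as f:
        for raw in f:
            line = re.sub(r"<[^>]+>", "", raw).strip()  # drop inline cue tags
            if not line or "-->" in line or line.startswith(
                ("WEBVTT", "Kind:", "Language:")
            ):
                continue  # skip headers and timestamp lines
            if not lines or lines[-1] != line:  # auto-captions repeat lines
                lines.append(line)

    prompt = "Rewrite this transcript as an article:\n\n" + " ".join(lines)
    # ...send `prompt` to whatever model does the rewrite step.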

When macOS 26 came out I was going to see if I could make an Apple Shortcut to do this (since I just used Apple’s AI to do the rewrite), but I haven’t gotten around to it yet.

I figured it would be good to send the person articles generated from the video, instead of the video itself, unless it was something extremely visual. It might also be nice to summarize a long podcast. How many 3-hour podcasts can a person listen to in a week?

fwip · 13 days ago
The other week, I was asking Gemini how to take apart my range, and it linked an instructional Youtube video. I clicked on it, only to be instantly rickrolled.
ecshafer · 13 days ago
This is the best argument for AI sentience yet.
sidewndr46 · 13 days ago
I didn't really think about it, but I start a ton of my prompts with "generate me a single C++ code file" or similar. There are always 2-3 paragraphs of prose in there. Why is it consuming output tokens on generating prose? I just wanted the code.
g947o · 13 days ago
Didn't expect C++ code generation to be as bad as recipe websites.
kube-system · 13 days ago
I haven't used Gemini much, but I have custom instructions for ChatGPT asking it to answer queries directly without any additional prose or explanation, and it works pretty well.
jeffbee · 13 days ago
That's interesting ... why would you want to wall off and ignore what is undoubtedly one of the largest repositories of knowledge (and trivia and ignorance, but also knowledge) ever assembled? The idea that a person can read and understand an article faster than they can watch a video with the same level of comprehension does not, to me, seem obviously true. If it were true there would be no role for things like university lecturers. Everyone would just read the text.
ffsm8 · 13 days ago
YouTube has almost no original knowledge.

Most of the "educational" and documentation-style content there is usually "just" gathered together from other sources, occasionally with links back to the original sources in the descriptions.

I'm not trying to be dismissive of the platform; it's just inherently catered towards summarizing results for entertainment, not for clarity or correctness.

pjc50 · 13 days ago
I read at a speed which YouTube would consider about 2x-4x, and I can text-search or even just skim articles faster still if I just want to do a pre-check on whether it's likely to be good.

Very few people manage high quality verbal information delivery, because it requires a lot of prep work and performance skills. Many of my university lectures were worse than simply reading the notes.

Furthermore, video is persuasive through the power of the voice. This is not good if you're trying to check it for accuracy.

thewebguyd · 13 days ago
YouTube videos aren't university lectures, by and large. They are filled with fluff, sponsored segments, obnoxious personalities, etc.

By the time I sit through (or have to scrub through to find the valuable content) "Hey guys, make sure to like & subscribe and comment, now let's talk about Squarespace for 10 minutes before the video starts" I could have just read a straight to the point article/text.

Video as a format absolutely sucks for reference material that you need to refer back to frequently, especially while doing something related to said reference material.

latexr · 13 days ago
> If it were true there would be no role for things like university lecturers.

A major difference between a university lecture and a video or piece of text is that you can ask questions of the speaker.

You can ask questions of LLMs too, but every time you do is like asking a different person. Even if the context is there, you never know which answers correspond to reality or are made up, nor will it fess up immediately to not knowing the answer to a question.

adrian_b · 13 days ago
There are obviously many things that are better shown than told; e.g., YouTube videos about how to replace a kitchen sink or how to bone a chicken are hard to substitute with written text.

Despite this, there also exist a huge number of YouTube videos that only waste much more time in comparison with, e.g., an HTML web page, without providing any useful addition.

pengaru · 13 days ago
This "knowledge source" sponsored by $influence...
jonas21 · 13 days ago
If you click through to the study that the Guardian based this article on [1], it looks like it was done by an SEO firm, by a Content Marketing Manager. Kind of ironic, given that it's about the quality of cited sources.

[1] https://seranking.com/blog/health-ai-overviews-youtube-vs-me...

xnx · 13 days ago
Sounds very misleading. Web pages come from many sources, but most video is hosted on YouTube. Those YouTube videos may still be from the Mayo Clinic. It's like saying most medical information comes from Apache, Nginx, or IIS.
gowld · 13 days ago
To the Guardian's credit, at the bottom they explicitly cited the researchers walking back their own research claims.

> However, the researchers cautioned that these videos represented fewer than 1% of all the YouTube links cited by AI Overviews on health.

> “Most of them (24 out of 25) come from medical-related channels like hospitals, clinics and health organisations,” the researchers wrote. “On top of that, 21 of the 25 videos clearly note that the content was created by a licensed or trusted source.

> “So at first glance it looks pretty reassuring. But it’s important to remember that these 25 videos are just a tiny slice (less than 1% of all YouTube links AI Overviews actually cite). With the rest of the videos, the situation could be very different.”

Oras · 13 days ago
Credit? It’s a misleading title and clickbait.

While 1% (if true) is a significant number considering the scale of Google, the title implies that citing YouTube represents the majority of results.

Also, what’s the researcher’s view history on Google and YouTube? Isn’t that a factor in Google search results?

barbazoo · 13 days ago
> Google’s search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions

It matters in the context of health related queries.

> Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said.

> “This matters because YouTube is not a medical publisher,” the researchers wrote. “It is a general-purpose video platform. Anyone can upload content there (eg board-certified physicians, hospital channels, but also wellness influencers, life coaches, and creators with no medical training at all).”

NewsaHackO · 13 days ago
Yea, clearly this is the case. Also, as there isn't a clearly defined public-facing medical knowledge source, every institution/school/hospital system would be split from the others even further. I suspect that if one compared the aggregate of all reliable medical sources, it would be higher than YouTube by a considerable margin. Also, since this search was done with German-language queries, I suspect that would reduce the chances of reputable English sources being cited even further.

Deleted Comment

gumboshoes · 13 days ago
Might be but aren't. They're inevitably someone I've never heard of from no recognizable organization. If they have credentials, they are invisible to me.
xnx · 13 days ago
Definitely. The analysis is really lazy garbage. It lumps together quality information and wackos as "youtube.com".
danpalmer · 13 days ago
> YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said.

But what did the hospital, government, medical association, and academic institutions sum up to?

The article goes on to give the 2nd to 5th positions in the list. 2nd place isn't that far behind YouTube, and positions 2-5 add up to nearly twice YouTube's share (8.26% > 4.43%). This is ignoring the different accessibility of video vs. articles, and the fact that YouTube has health fact-checking for many topics.

I love The Guardian, but this is bad reporting about a bad study. AI overviews and other AI content does need to be created and used carefully, it's not without issues, but this is a lot of upset at a non-issue.

jacquesm · 13 days ago
Of course they do: YouTube makes Google more money. Video is a crap medium for most of the results to my queries, and yet it is usually by far the biggest chunk of the results. Then you get the (very often comically wrong) AI results, and then finally some web page links. The really odd thing is that Google has a 'video' search facility; if I wanted a video as the result I would use that instead, or I would use the 'video' keyword.
Pxtl · 13 days ago
What's surprising is how poor Google Search's access to YouTube transcripts is. Like, I'll Google search for statements that I know I heard on YouTube, but they just don't appear as results even though the video has automated transcription on it.

I'd assumed they simply didn't feed it properly to Google Search... but they did for Gemini? Maybe just the Search transcripts are heavily downranked or something.

ajross · 13 days ago
Just to point out, because the article skips the step: YouTube is a hosting site, not a source. Saying that something "cites YouTube" sounds bad, but it depends on what the link is. To be blunt: if Gemini is answering a question about Cancer with a link to a Mayo Clinic video, that's a good thing, a good cite, and what we want it to do.