Google is taking a different approach this time, moving quickly. While NotebookLM is indeed a remarkable tool for personal productivity and learning, it also opens the door for spammers to mass-produce content that isn't meant for human consumption.
Amidst all the praise for this project, I'd like to offer a different perspective. I hope the NotebookLM team sees this and recognizes the seriousness of the spam issue, which will only grow if left unaddressed. If you know someone on the team, please bring this to their attention: could they provide a tool, or some plain-English guidelines, to help detect audio generated by NotebookLM? Is there a watermark or any other identifiable marker that can be used?
Where do you get the "low-quality" part from? My experience with NotebookLM is that it creates much higher quality, more informative, more fact-based, and more concise podcasts than 99% of the stuff I listen to. I've mostly switched over to NotebookLM entirely for my podcast listening. It generally offers a far higher quality experience, from my perspective.
Maybe you have the problem backwards - we accidentally end up listening to non-NotebookLM podcasts?
Personally, I hate even the idea of an AI made podcast, because to me podcasts are personal and emotional. They're about the individual humans who make them. They're not just a source of "information".
It's an interesting assumption that content is considered bad/fake by virtue of being AI generated. 20 years ago, people hated how Photoshop changed the photo design industry; NotebookLM is knocking on the door now.
Interesting - are there any podcasts in particular that you recommend? Everything I've heard from it just seems like the most banal, cookie-cutter stereotype of a podcast, with nothing but extremely surface-level summarization of a given article, peppered with random cliches and fake-sounding reactions: "Wow! OK, so let's hear more about that. I'm intrigued!" "OK, let's dive deep." Etc.
It's trained on too many shallow podcasts. Go compare any NotebookLM podcast with an episode of Hardcore History. The latter goes into much more depth (even when you account for it being much longer).
I think OP is presenting a different problem: while this tool makes it possible to create good-quality podcasts, it also enables spammers to quickly generate tons of garbage episodes just to profit from the advertising they put in them.
Goody, let’s just drive out all the human creators who actually interview real experts and go in depth on a subject, with AI-generated voices and summaries.
If AI ends up destroying humanity, it isn’t going to be through weapons and death robots, but just by entertaining and placating us all to death.
This doesn't strike me as being as much of a problem as it apparently does you. What are the biggest issues you foresee?
I'm an avid podcast listener, but I already ignore 99.9% of podcasts out there. I'm not concerned that this is going to become 99.99%.
If these AI generated podcasts are all bad, I will just continue to ignore them. If some turn out to be good, it seems like a win to me.
If you're worried about an existential "what happens to the world if all media is machine generated", I guess I'm willing to hop on the ride and see what we find out.
99.9? There are roughly 3 million podcasts out there right now - I listen regularly to about 10 over a year (in any given week maybe 3-4). I'm therefore ignoring 2,999,990, or 99.9997%, of podcasts. I definitely agree with you that this isn't a problem.
(Also - ironically, one of those 10 podcasts I listen to regularly is the Deep Dive on AI. A NotebookLM production!)
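A quick sanity check of that percentage, in Python (using the rough figures above - 3 million shows, 10 followed):

```python
total = 3_000_000      # roughly 3 million podcasts, per the estimate above
listened = 10          # shows actually followed in a given year
ignored = total - listened

print(ignored)                   # 2999990
print(f"{ignored / total:.4%}")  # 99.9997%
```

So the "0.1% ignored becomes 0.01%" framing understates it: regular listeners already filter out all but a vanishing sliver.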
Assuming Google retains the current two voices for the audio overviews, it will rapidly become obvious to most people where a podcast came from.
I've seen "creators" on YouTube running NotebookLM-generated audio through (e.g.) ElevenLabs to change the voices, but this invariably degrades the quality.
Counterpoint: Most podcasts were utterly worthless before AI too. The world will do fine losing a few mattress ad vehicles.
Like other data, provenance suddenly matters a lot. From my POV, that's good. Not all data sources are created equal, and this is putting it into stark enough relief it might actually change the landscape. (In case it isn't obvious, I strongly believe most of the Internet was garbage well before LLMs. We just called it "SEO". Still garbage)
I generally agree, but when AI generated content is actively trying to avoid being labelled as “AI generated” it kinda gets depressing. Because in the end, it will just make the entire industry “seem” worthless, akin to AI generated pictures.
I'd rather let the end user know whether it was made by humans or not, and let the market decide. If people love listening to such content, let it be. But hiding how it was made feels a bit disingenuous.
It sounds more like saying we should ban email, and that all email providers should consider the problem of email spam - a problem traditional mail didn't have, because no one could afford that many envelopes and stamps.
Or like saying we should go back to carts because cars are noisy, and not only that, might collide with pedestrians, and might even collide with each other.
Instead of constraining the tools and curtailing the progress (email and cars), we should probably try to contain and curtail abusers. Very hard to do, I know, but the right thing to do.
Paper mail does have a spam problem too. Though I guess the differences in price and scale turn a quantitative issue into a qualitative one, which I guess was your point?
You're assuming that everybody shares your opinion of cars being progress and/or progress being good - you're assuming too much.
Nearly every Google Image search result has AI images now. Personally, I'm starting to attribute this to Image Search having been neutered and downgraded a ton over the past few years anyway, so worse content surfaces in general.
But consider it from Google's perspective, and this is why I think they won't care: serving snippets and caches of articles had rights holders attacking them, serving thumbnails of images had rights holders attacking them, serving tiny bits of songs in the background of videos had rights holders attacking them.
Serving AI doesn't. I don't think the current management at Google will care if Google shows fake baby peacocks, as long as it can serve them without being bothered by rights holders - the same way a Gemini summary can launder article information.
> it also opens the door for spammers to mass-produce content that isn't meant for human consumption.
What's new? Every novel class of genAI product has brought a tidal wave of slop, spam and/or scams to the medium it generates. If anyone working on a product like this doesn't anticipate it being used to mass produce vapid white-noise "content" on an industrial scale then they haven't been paying attention.
What I'm aiming for is to ensure that the NotebookLM team is aware of the impact and actively considering it. Hopefully, they are already working on tools or mechanisms to address the problem - ideally before their colleagues at YouTube and Google Search come asking for help to fight NotebookLM-generated spam :)
> Is there a watermark or any other identifiable marker that can be used?
The problem with this is that it's not feasible long-term, or even medium-term: as soon as a watermarking system is implemented, a watermark-removal system will be created.
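To make that arms-race point concrete, here is a deliberately naive sketch (not how any real system works): embed a bit pattern in the least significant bits of 16-bit PCM samples, then "wash" it out with inaudible noise. Robust schemes (Google's SynthID for generated media, for instance) spread the signal far more cleverly, but the cat-and-mouse dynamic is the same.

```python
import random

def embed_watermark(samples, bits):
    """Hide a bit pattern in the LSBs of 16-bit PCM samples (toy scheme)."""
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def read_watermark(samples, n):
    """Read back the first n hidden bits."""
    return [s & 1 for s in samples[:n]]

def wash(samples):
    """Destroy the mark by randomizing LSBs -- inaudible at 16-bit depth."""
    return [(s & ~1) | random.randint(0, 1) for s in samples]

audio = [1000, -2000, 3000, 4000, -500, 600, 700, 800]
mark = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_watermark(audio, mark)

print(read_watermark(marked, 8) == mark)  # True: the mark reads back cleanly
# Washing changes each sample by at most 1 out of 65536 levels:
print(max(abs(a - b) for a, b in zip(marked, wash(marked))))  # 0 or 1
```

The removal step here is trivial precisely because the embedding is trivial; the open question is whether any embedding survives re-encoding, pitch shifting, and voice-conversion passes like the ElevenLabs laundering mentioned elsewhere in the thread.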
Who cares? This is a problem for podcast hosters like Spotify, but not for listeners. Listeners can just follow their usual podcasts and never see 99% of the stuff that is out there.
The comments' default remedy is tribal: "The only moral content is my content." We sort of used to live in that world under the studio and TV network system. Most consumers would say it was not so bad - maybe even better.
Of course, the commenter never says this, living in the world today, where the writing he likes would never be published by the New York Times like it is on Twitter, the TV he likes would never be offered for free like it is on YouTube, and the music he likes would never be offered for pennies like it is on Spotify. Some meaningful creators will lose from every remedy you could think of, where Google "something somethings" AI. Maybe the root problem is generalizing.
Podcasts - episodic radio shows hosted on Apple Music and Spotify - haven't been around for very long. Not long enough for kids to be tutored in making podcasts and then become adults with that sentimental hobby, as with playing violin or oil painting. If you believe the "Human Authenticity Badge" is meaningful for podcasts, it's complicated: tradition plays the biggest role in the outrage you are trying to spin, not an appeal to slop and spam - of course, there is already a ton of low-quality podcasts, music and art made by real people for no nefarious purpose whatsoever. Like many of these posts, which are really common on HN, this one suggests no sensible remedy beyond pointing a finger at some giant corporation and asking it to do something impossible.
If you care a lot about podcast quality, go and make your own podcast service with better discovery. Once you realize the antagonist was collaborative filtering, made possible by non-negative matrix factorization dating from the year 2000, and not AI, you will at least have learned something from the comment, instead of just feeling better. And then, how do you propose to curate by hand, and why would someone choose your curation over the New Yorker's? And maybe those very purists, trying to make everything sentimental, accusing everyone of slop and spam - well, why do so many creators thrive and ignore the New Yorker's opinion about them entirely? Perhaps curation is not only not scalable, but also wrong. Difficult questions for listeners and podcast authors alike.
Well, which one is it? Are the podcasts low quality or not? If they are, what the hell are you worried about? To be worried about, idk, disinformation from podcasts of all things is absolute silliness. Won't someone think of the... podcast audiences? Fuckin what dude?
I was using this yesterday. I dumped all postmortems for an aspect of our infrastructure into a notebook and could then ask it to pull out common themes. It was remarkably effective. I also generated one of these "audio overviews" (aka podcasts) and it was great.
There was a vast improvement in quality from giving it a prompt when generating the overview. The generic un-prompted overview was aimed at entirely the wrong audience - in our case, users of our infrastructure rather than its developers. When instructed to generate an overview for the SRE team, with guidance on what to focus on, it was far better.
Was it useful for our in-depth analysis? No. Would I listen to one based on the last 100 postmortems for a new team I joined? Absolutely. As an overview it was ideal, pulling out common themes from a lot of data and even getting some of the vibe right.
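For reference, a sketch of the kind of audience-steering instruction that made the difference. The helper and its parameter names are hypothetical - NotebookLM just takes free-text customization, so this only illustrates how you might assemble it:

```python
def overview_prompt(audience, focus, duration="long",
                    tone="professional & engaging"):
    """Assemble a free-text customization prompt for an audio overview.

    Hypothetical helper: the parameters mirror the knobs that mattered
    in practice (audience, focus, duration, tone), not any real API.
    """
    return (
        f"Generate the overview for a {audience} audience. "
        f"Focus on {focus}. "
        f"Duration: {duration}. Tone: {tone}."
    )

print(overview_prompt(
    audience="SRE team (technical)",
    focus="recurring root causes and remediation themes across the postmortems",
))
```

The key insight was naming the audience explicitly; without it, the model defaults to a generic, surface-level listener.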
I am late to Google's AI party, but... my personal impression (I might be wrong) is that the breadth and depth of Google's AI tools, ranging from NotebookLM to AI Studio, is heavily underrated. Too good, as far as I have tried.
Google, of course, is the birthplace of "Attention Is All You Need".
More to the point, they're good at attracting talent. Note the "Attention Is All You Need" paper came out in 2017. Noam Shazeer was so hamstrung at Google that he left to start Character AI. There he was reportedly close to a foundational model that would've crushed benchmarks, until Google literally bought him back for $2.7B [1]
My product https://reasonote.com allows you to generate podcasts as well, and it's had this feature for a few weeks.
Improvements over NotebookLM:
(1) You can start with just a subject, and you don't need a full document to begin (though you can do that too![1])
(2) The podcast generates much faster
(3) The podcasts are interactive -- you can ask the hosts to change direction mid-podcast, and they will do so.
(4) (Coming soon) You'll be able to make a Spotify-style Queue of Podcast topics, which you can add to as you encounter new ideas.
The primary tradeoff is that the voices / personalities are somewhat less engaging than NotebookLM at this time, though this will be dramatically improved over the coming months.
This is all in addition to the core value proposition, which is roughly "AI Generated Duolingo for Any Subject".
It's early days, but I'd love for you all to check it out and give me feedback :)
[1] Documents are currently heavily length-limited but this will be improved shortly
Have you tried doing what they do, where you generate the script, then run it through again and ask it to add the pauses and personality to the script?
That's coming soon, yes! I'm learning how to emotionally prompt the different voice APIs right now. ElevenLabs has some interesting writing on the subject in their prompting guide.
I'm also playing with other voice models -- built an awesome "voice actor simulator" with OpenAI Realtime Voice -- but it's expensive. Considering asking users to pass in an OpenAI API key for advanced voice? Or maybe just passing the per-token cost along to the user in their subscription.
Nice. I've only scratched the surface of NotebookLM, mainly dumping in lots of component reference material (datasheets, reference guides, application notes, etc.). The text querying works great, but the audio overview wasn't very useful when it stuck to the high level of the content. With some ability to steer the topic, it might be quite useful!
AI tooling has now made it too easy to find things.
On a web forum I'm an admin on, a user opened a DM a week ago titled "Google Notebook LM": someone had shared a generated podcast that summarised the forum's view on a particular subject, and it called out the username of someone who had strong opinions.
In response, another user ran with this and asked for a podcast to be generated summarising everything that was said by the user, their political views and all their hot takes.
Erm... uh-oh.
The use of real identities, and of the same username across multiple sites, now makes things like this trivial: "take this GitHub username, find what sites the same username exists on, make a narrative of everything they've ever said, find the best and worst of what they've ever said"... which is terrifying.
I've said to the user the same old line we always repeat, "anything placed on the internet is effectively public forever", but only now are the consequences of this really being seen.
The forums I run allow username changes, encourage anonymity as much as possible, but we're at a point where multiple online identities, one for every site, interest, employer, etc... is probably the best way to go.
I notice on HN that there are many accounts that seem to register just to comment on particular stories and nothing more, and the comments are constructive and well thought out, and now I wonder whether some are just ahead of the curve on this — obscuring the totality of their identity from future employers, or anyone else who might use their words against them.
It feels like our lightweight choices in the past will start to have significant consequences in the present or future, and it's only a failure of imagination that is delaying a change in user behaviour.
The ability to do that exists, and was always going to get easier. We used to all use pseudonyms, but that fell out of fashion somewhat; and even then, over time it's inevitable you'll say one or a few things that can deanonymize you. This was always going to happen, and we can only hope it will change the public perception of privacy, which to this point has often been indifference, or even annoyance when one brings it up.
> I notice on HN that there are many accounts that seem to register just to comment on particular stories and nothing more, and the comments are constructive and well thought out, and now I wonder whether some are just ahead of the curve on this — obscuring the totality of their identity from future employers, or anyone else who might use their words against them.
Throwaways are very common here for that purpose! On my end I'm becoming more interested in how to safeguard users -- anonymize them -- and also how to make it easy to _generate_ throwaways without opening the door to spam (e.g. generate from a valid account, but then detach it). HN likely gets around this by being niche; I think the somewhat unattractive site design helps there.
Yeah the light cone of online activity seems to only grow with little diminishing, which seems unnatural and counter to the type of environment we evolved for. GDPR and the right to be forgotten seemed funny in my youth, now I see it as wisdom ahead of its time.
https://github.com/ListenNotes/ai-generated-fake-podcasts/bl...
Just recently, a Hacker News post highlighted how nearly all Google image results for "baby peacock" are AI-generated: https://news.ycombinator.com/item?id=41767648
It won't be long before we see a similar trend with low-quality, AI-generated fake podcasts flooding the internet.
It was factually accurate, and presented the topic in a manner that was easy to digest and kept it interesting.
I didn't plan to but ended up listening to the whole thing, and I normally don't enjoy the podcast format.
For someone new to the topic, it'd be a pretty great intro compared to reading the official pages.
NotebookLM seems wonderful for digesting various content in an alternative way. It’s not a “fake podcast” either.
Nobody is saying that the audio output should or should not be published somewhere. That’s a user decision for both publishing and subscribing.
Indexing and discovery on the internet are where you should advocate policing, instead of nitpicking a useful tool.
Yet more "but humans also".
It's certainly easier for the creators of genAI to build detection tools than for outsiders to do so. AI audio detection is a hard problem - https://www.npr.org/2024/04/05/1241446778/deepfake-audio-det...
(Happy to be proven wrong)
Unfortunately Kagi image search is polluted with the same crap. I'd started to trust it recently but not so sure now.
[Edit] This is true even when you specifically use the 'exclude AI' filter
The 1,300+ shows are just the ones recently removed from Listen Notes.
Give it a few days, and I’m sure the number will double, quadruple, and continue to grow. :(
[1] https://www.wsj.com/tech/ai/noam-shazeer-google-ai-deal-d360...
audience=technical, duration=long, tone=professional & engaging
I'm still getting the tooling right so that the videos get made on a better and more consistent schedule.