if I just need a basic fact or specific detail from an article, and being wrong has no real-world consequences, I'll probably just gamble and take the AI's word for it most of the time. Otherwise I'm going to double-check with an article/credible source
if anything, I think AI Mode from Google has made it easier to find direct sources for what I need. A lot of the time, I am using AI for "tip of the tongue" type searches. I'll list a lot of information related to what I am trying to find, and AI Mode does a great job of hunting it down for me
ultimately though, I do think some old aspects of google search are dying - some good, some bad.
Pros: I don't feel the need to sift through blog spam, I don't need to scroll past paid search results, I can avoid the BS part of an article where someone goes through their entire life story before the actual content (I'm talking things like cooking websites)
Cons: Google is definitely going to add ads to this tool at some point, and some indie creators on the internet will have a harder time getting their name out.
my key takeaway from all this is that people will only stop at your site if they think your site has something to offer that the AI can't offer. and this isn't new. people have been stealing blog content and turning it into videos forever. people will steal paid tutorials and release the content for free on a personal site. people will basically take content from site-X and repost it in a more consumable format on site-Y. this kind of theft is so obvious, and no one likes seeing the same thing reposted 1000 times. long term, I think this is a win
Quanta published an article about a physics lab asking ChatGPT to help come up with a way to perform an experiment, and ChatGPT _magically_ came up with an answer worth pursuing. but what actually happened was that ChatGPT was referencing papers that had basically gone unread from less famous labs/researchers
it's amazing that ChatGPT can do something like that, but `referencing data` != `deriving theorems`. the person posting this shouldn't just claim "ChatGPT derived a better bound" in a proof; they should first do a really thorough check of whether this information could've just ended up in the training data