The paper author likely believes Foo and Bar are X; it may well be that all their co-workers, if asked, would agree. But "everybody I have coffee with agrees" can't be cited, so we get this sort of junk citation.
Hopefully it's not crucial to the new work that Foo and Bar are in fact X. But that's not always the case, and the problem is that years later somebody else will cite this paper for the claim "Foo and Bar are X", a claim the paper itself was only citing erroneously.
But this would be more powerful with an open knowledge base where all papers and citation verifications were registered, so that all the effort put into verification could be reused and corrections could be propagated through the citation chain.
Can ChatGPT drive a car? No; we have specialized models for driving vs. generating text vs. images vs. video, etc. Maybe ChatGPT could pass a high school chemistry test, but it certainly couldn't complete the lab exercises. What we've built is a really cool "algorithm for indexing generalized data", so you can train that driving model very similarly to how you train the text model without needing to understand the underlying data that well.
The author asserts that because ChatGPT can generate text about so many topics, it's general; but it's really only doing one thing, and that's not very general.
Generative AI exposes how broken copyright law is, and how much reform is needed for it to serve either its original or its perverted purpose.
I would not blame generative AI as much as I would blame the lack of imagination and forethought, and indeed the arrogance, among lawmakers, copyright lobbyists, and even artists, who failed to come up with better definitions of what should have been protected.