Maybe not you in particular, but I expect people to be more forthcoming when writing to an LLM than in a raw Google search.
For example, compare a search for "nice places to live in" with "I'm considering moving from my current country because I think I'm being politically harassed, and I want to find nice places to live that align with my ideology of X, Y, Z".
I do agree that, after collecting enough search datapoints, one could piece together the second sentence from the first, and that this is more akin to a new instance of an already existing issue.
It's just that, by default, I expect more information to be obtainable, and more easily, from what people write to an LLM than from what they type into a search box.
If we did this (to a good enough level of detail), would the resulting model be able to derive relativity? How large an AI model would it take to do so, if it only had access to everything published up to 1904?