If ChatGPT fabricates a link or makes something up, there's a potential feedback mechanism: e.g., I ask ChatGPT to explain a science term, and then in class I get told my understanding is incorrect when I use the ChatGPT definition.
That mechanism doesn't exist for every use case, but I'm hopeful everyone will be "bitten" by ChatGPT once, and that afterwards folks will verify its output appropriately.
I suspect our intuition tells us code will be verified first by a programmer, then by the language parser, whereas data is likely to be manipulated without human scrutiny.
Does anyone seriously believe that this will forever be the case?
This was a long time ago, but I bought a Dell with Ubuntu while visiting my parents in Oregon, and took it back with me to where I was living in Innsbruck, Austria.
The hard drive (I told you this was a while back) died on me. I called up Dell expecting some nightmare: having to ship it back to the US where I'd bought it, waiting months, and who knows what else.
But what actually happened was some guy showed up at my door the next day with a new drive and swapped it out.
I was so impressed.