This was obviously going to happen, though the amount of pushback I received for making exactly that observation was less obvious.
The main argument was hallucinations, but I have corrected enough people in customer service to know that the humans are wrong more often than ChatGPT (even 3.5).
Even a mediocre chatbot with some in-domain training data is fine for the purpose, which is not to maximally help the customer.
I have been thinking about it lately, though. Maybe having that responsibility would cause other things to click into place? However, I'm worried about it not working out and affecting a life beyond my own.