I asked about a specific Dutch book, and ChatGPT was wrong about the author (it named another author, born a century later). I corrected it, but it told me that the two authors were the same person and that one name was a pseudonym.
I asked for the birthdate of the correct author. It gave me a roughly correct answer, with dates of birth and death.
I then asked for the birthdate of the wrong author. Again it gave a roughly correct answer; indeed, he was born long after the other author died.
I asked ChatGPT how it could be that the dates differed. It told me that it is quite common for an author to go by a pseudonym.
I told it it was wrong: they are different authors who lived in different centuries. But it stubbornly refused to accept this, lecturing me again that it is perfectly common for authors to go by two different names.
edit: Just to add: when asked for a description of the book, it gave me a very believable summary that was total nonsense. This is what really disturbed me about ChatGPT. That said, I am very impressed that we now have a system that is very good at parsing human language, something long thought to be impossible. Combining that strength with an actual data source would, in my opinion, be the only way forward.
Then I asked it "Could you adapt the function so that it works on Venus, where years have 224 days?"
It offered me a new version of the function, which simply checks whether the year is a multiple of 224. Apparently, on Venus, the number of days in a year and the frequency of leap years are the same number. It qualified the answer: "It's worth noting that this function is based on current knowledge and understanding of Venus..."
I asked it "What if we want the function to use Venus days as well as Venus years?"
It offered me the same function, except that a) the variable 'years' was now called 'days', and b) the modulus was changed from 224 to 224.701.
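To make the described behavior concrete, here is my own rough reconstruction of the two functions, not ChatGPT's verbatim output (the function names are mine):

```python
def venus_leap_year(year):
    # First version as described: simply checks whether the year number
    # is a multiple of 224 -- conflating the length of the Venus year
    # (~224 Earth days) with a leap-year frequency.
    return year % 224 == 0

def venus_leap_day(days):
    # Second version as described: identical logic, with the argument
    # renamed to 'days' and the modulus changed to the float 224.701.
    return days % 224.701 == 0
```

Neither check has anything to do with how leap years actually work, which is the commenter's point.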
So I asked "Should the argument to the last function be a float or an integer?"
It gave me three paragraphs of complete nonsense about how the difference between floats and integers affects the precision of calculating leap years (while again warning that the exact value of the Venus year might change).
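For contrast, the question has a straightforward answer on Earth: a leap-year function takes an integer calendar year, and the fractional day count (≈365.2425) is encoded in the rule itself, not in the argument. A minimal sketch of the standard Gregorian rule:

```python
def is_leap_year(year: int) -> bool:
    # Gregorian rule: every 4th year is a leap year, except century
    # years, except every 400th year. The argument is always an integer;
    # the ~0.2425 fractional day is what the 4/100/400 cycle approximates.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

For example, `is_leap_year(2000)` is True while `is_leap_year(1900)` is False.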
ChatGPT does a very good imitation of a certain type of candidate I've occasionally interviewed, who knows almost nothing but is keen to try various tricks to bluff you, including being confidently wrong, providing lots of meaningless explanation, and sometimes telling you that you are wrong about something. I have never hired anyone like this, but I've occasionally come close.
I have been trying various interview questions on ChatGPT, originally because my colleagues warned me that a candidate surreptitiously using it could ace almost any interview. I was skeptical, and I have not been convinced.
But I think it's actually a great exercise to practice interviewing on it. If ChatGPT can answer your questions accurately (try to be fair and ignore its slightly uncanny tone), then you probably need better questions. If you are quite technical and put some thought into it, you should be able to come up with things which are both novel enough and hard enough that ChatGPT will simply flounder catastrophically. (I'm not referring to 'tricks' like the Venus question, but real questions on how to achieve something moderately complicated using code.) It's a really good reminder too that when we ask candidates to write code, we should examine and debug it in detail, then ask decent follow-up questions, rather than just accepting something that looks right.
However, most of the time this is a total waste of time, because the person arguing against the obvious is doing so not out of intelligent caution but out of fear.
So sorry if I'm not going down this route with you. If you're sincerely curious, I suggest you go on a bike ride in Argenteuil on a Friday.
What are the words you are scared of using if you answered this question?
Why won't you use them?