That said, I have a strong desire to learn math, and I have a copy of Understanding Analysis waiting for me to pick up. I think I'd like to learn Analysis, Linear Algebra, Probability, Graphs... Book suggestions are welcome by the way.
It also depends on how many things you are interested in. If you care about, say, a complex object with many properties, then the interactivity of a debugger trumps logging.
I've been using Copilot extensively for the last 18 months, and the inferences it draws while I'm coding are fantastic.
So I fired up my old OpenAI account, and ChatGPT seems quite horrible.
0 for 3 on my prompts so far...
Asked who was the president of my country in 1926, it produced a hilariously wrong composite, mashing up two unrelated names. (Unlike "Who was the King of France in 1889?", that question has a correct answer.)
Asked a question taken from Wikipedia about an unsolved graph theory problem, ChatGPT confidently responded that no solution is possible and posted a trivial explanation of one of the known limitations.
Then I prompted it to write Python code to generate an answer to the above problem, and ChatGPT obliged with a bozosort-style solution of exponential complexity...
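The original problem isn't specified here, but to give a sense of what an exponential brute-force answer of that kind looks like, here is a hypothetical sketch (graph 3-coloring is chosen purely for illustration, not the actual problem from the thread):

```python
from itertools import product

def brute_force_3_coloring(n, edges):
    """Try every one of the 3**n possible colorings -- exponential time.

    This illustrates the complexity class of the answer described above,
    not a practical algorithm."""
    for coloring in product(range(3), repeat=n):
        # Accept the coloring only if no edge joins two same-colored vertices.
        if all(coloring[u] != coloring[v] for u, v in edges):
            return coloring
    return None  # no valid 3-coloring exists

# A triangle is 3-colorable; the complete graph K4 is not.
print(brute_force_3_coloring(3, [(0, 1), (1, 2), (0, 2)]))  # a valid coloring
print(brute_force_3_coloring(4,
      [(u, v) for u in range(4) for v in range(u + 1, 4)]))  # None
```

The search space is 3^n, so even a modest graph makes this intractable, which is the sort of answer you get when the model pattern-matches "try everything" instead of reasoning about the problem.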
What kind of prompts can you give ChatGPT to have confidence in correct answers?
There you can find the prompt that allowed ChatGPT to produce a working solution. It's a bit hit-and-miss, but you also have to make sure any assumptions are explicitly stated in the prompt.