GPT o1-preview solves this puzzle correctly in 13 seconds, with a thorough logical deduction in its comments and explanation.
I think it’s a bit unfair to the LLM to ask it to retrieve the puzzle definition from its training data, so I posted the puzzle description from Norvig's notebook.
The question is whether it could have solved the puzzle correctly before Norvig's article appeared. It could have been trained on the article or on HN comments about it (I am told that existing models can be modified and augmented, as comes up in any Llama discussion).
There could even be an added routine that special-cases trick questions and high-profile criticisms.
https://chatgpt.com/share/670103ae-1c18-8011-8068-dd21793727...