Instead, this happened:
https://chatgpt.com/share/682cce62-c53c-8003-be2c-2929395868...
Basically, the model confidently outputs a guess, then checks it, finds it incorrect, and tries again repeatedly, even repeating the same guesses over and over. It does not recognize any symmetry and acts like a completely unstructured agent. In the end, the model vehemently asserts that the puzzle has no solution. I really did not expect this, and I will update my beliefs accordingly if the models behave this badly on future puzzles.
I also asked ChatGPT o3, and it thought for 11.5 minutes! https://chatgpt.com/share/682d0993-db4c-8004-a66c-3908ef7203...
Packages were supposed to replace programming. They got you 70% of the way there as well.
Same with 4GLs, Visual Coding, CASE tools, even Rails and the rest of the opinionated web tools.
Every generation has to learn “There is no silver bullet”.
Even though Fred Brooks explained why in 1986: there are essential tasks and there are accidental tasks, and the tools really only help with the accidental ones.
AI is a fabulous tool, far more flexible than previous attempts, because I can just talk to it in English and it covers every accidental issue you can imagine. But it can't do the essential work of complexity management, for the same reason it can't prove an unproven maths problem.
As it stands, we still need human brains for those things.