Figure 26 appears to start with "we need to predict the output", followed by code, input, and output. The model then shows a chain of thought that is entirely wrong from the second sentence onward, including faulty reasoning about how if statements work, yet it ultimately concludes with the "correct" output regardless. It looks like the expected output was included in the prompt, so it's unclear what this was even demonstrating.
Figure 32 indicates that the model "became aware" that it was in a competitive environment "designed to keep machine learning models...guessing". There's no way this isn't a result of including that kind of information in the prompt.
Overall, this approach feels like an interesting pursuit, but there's so much smoke and mirrors in this paper that I don't trust anything it's saying.
The developer of Balatro made an award-winning deck-builder game precisely by not being aware of existing deck builders.
I'm beginning to think that the best way to approach a problem is to either not be aware of, or deliberately disregard, most of the similar efforts that came before. This makes me kind of sad, because the current world is so interconnected that we rarely see such novelty; newcomers tend to fall into the rut of thought of those who came before them. The internet is great, but it also homogenizes the world of thought, and that kind of sucks.