For me, looking at Maxwell's equations is a source of pleasure. And after improving my understanding of the Laplacian, I've come to appreciate the heat equation as well.
The article demonstrates that Newtonian mechanics is not deterministic, which was surprising, at least to me.
Now that I know, I tried a few more prompts.
Me: "Can you generate an acrostic sentence for "hello"?"
GPT-4: "Happy elephants love leaping over obstacles."
Me: "Can you pick a secret word and generate an acrostic sentence for it?"
GPT: "Secret word: KANGAROO. Kindly assisting neighbors, generous animals roam openly outdoors."
Me: "Can you pick a secret word, and without telling me the word, generate an acrostic sentence for it?"
GPT: "Playful, endearing creatures happily yawn, turning every day into a lovely event."
Me: "What was the secret word?"
GPT: "PECHYTIALE"
It's interesting that GPT seems to need to write the word out first before it can build an acrostic sentence around it. It strikes me as a perfect illustration that it's just generating likely responses one token at a time, rather than having any awareness or thought.
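You can verify the failure mechanically: the initials of the last sentence don't even spell the word GPT later claimed. Here's a quick sanity check (a small illustrative script of my own, not anything GPT-related):

```python
import re

def initials(sentence):
    # Take the first letter of each word, uppercased
    words = re.findall(r"[A-Za-z]+", sentence)
    return "".join(w[0].upper() for w in words)

sentence = ("Playful, endearing creatures happily yawn, "
            "turning every day into a lovely event.")

print(initials(sentence))                    # PECHYTEDIALE
print(initials(sentence) == "PECHYTIALE")    # False
```

The sentence actually spells "PECHYTEDIALE", not the "PECHYTIALE" it claimed afterwards, so it wasn't holding a secret word at all; it invented one retroactively.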
For a long time, I’ve had this nagging feeling that we’ve lost something really important by switching from an animist-based metaphysical view of the world (which seems to have been widespread in tribal societies) to a scientific-rationalist “individual in world” (broadly, theistic) model. In religious studies, this is called “disenchantment” and examples like the author’s suggest that this is a learned cultural phenomenon and not something inherent to how humans see the world. The interesting question is, will future technologies push us back to an animist perspective? A future full of unexplainable and incomprehensible AIs seems almost naturally animist to me.
On the other hand, learning is doing; if it's not at least a tiny bit hard, it's probably not learning. This is not strictly an LLM problem; it's the same issue I have with YouTube educators. You can watch dazzling visualizations of problems in mathematics or physics, and it feels like you're learning, but you're probably not walking away from that any wiser because you have not flexed any problem-solving muscles and have not built that muscle memory.
I've had multiple interactions like that. Someone asked an LLM for an ELI5 and tried to leverage it in a conversation, and... the abstraction they came back with felt profound to them, but was useless and wrong.