The more important property is that, unlike compilers, type checkers, linters, verifiers and tests, the output is unreliable. It comes with no guarantees.
One could be pedantic and argue that bugs affect all of the above. Or that cosmic rays make everything unreliable. Or that people are non-deterministic. All true, but the failure rates differ by orders of magnitude.
LLMs don't do this. Instead, every question is immediately responded to with extreme confidence with a paragraph or more of text. I know you can minimize this by configuring the settings on your account, but to me it just highlights how it's not operating in a way remotely similar to the human-human one I mentioned above. I constantly find myself saying, "No, I meant [concept] in this way, not that way," and then getting annoyed at the robot because it's masquerading as a human.
Could a life radically and willfully different in many ways turn out to be better for most of us (which, critically, is what you claimed before)? It's certainly possible, given how few people take this route, but an appeal to nature is just not very convincing unless you can back it up with data.
I can't help but notice that you did not engage with how 40% of children dying, and another 20% of us being killed by some member of the cherished tribe, could possibly lead to high levels of life satisfaction. As far as I can tell, on the whole, the good old days were cruel, and rosy retrospection is just that.
I think if the theory goes that, from an evolutionary standpoint, we are still psychologically better equipped to be hunter-gatherers, we should assume that our feelings toward homicide and child mortality are comparable. So how happy can a people be when 40% of their children die and another 20% die by homicide?
If we follow that thread, I would argue that it's very unlikely that people were happier back then, or would be happier today, unless some other component of being hunter-gatherers makes us fantastically ecstatic.