Because the computer is fundamentally knowable. Somebody defined what a "close game" is ahead of time. Somebody defined what a "reasonable stretch" is ahead of time.
The minute it's solidified in an algorithm, the second there's an objective rule for it, it's no longer dynamic.
The beauty of the "human element" is that the person has to make that decision in a stressful situation. They don't have to contextualize it within all of their other decisions; they don't have to formulate an algebra. They just have to make a decision they believe people can live with. And then they have to live with the consequences.
It creates conflict. You can't have a conflict with the machine. It's just there, following rules. It would be like having a conflict with the bureaucrats at the DMV; there's no point. They didn't make a decision, they just execute on the rules as written.
I asked it to analyze a recent painting I made and found the response uninspired. That said, at least the feedback it provided was notably distinct from what I get from the US models, which tends to be pretty same-y when I ask them to analyze and critique my paintings.
As another subjective test, I asked it to generate lyrics for a song on a specific topic, and none of the options it gave me were any good.
Finally, I tried describing some design ideas for a website, and it gave me something that looks basically the same as what other models have given me. Once you get into a sufficiently niche design space, all the models seem to output things that look pretty much the same.
It works well as a narrative, but the second I started adding things like tracking high-level macro effects of the decisions, the world's "Turmoil" jumped from 4/10 to 10/10 within a couple of turns... even when the person who was killed would have been killed IRL.
Sonnet 4, o4-mini, and GPT 4o-mini all produced the same world-ending outcomes no matter who you kill. Killing Hitler in the 1930s: 10/10 turmoil. Killing Lincoln in the 1850s: 10/10 turmoil on the first turn.
I've come to the realization that the LLM shouldn't be used for the logic; it should just be used to narrate the choices you make.
This is exactly right. LLMs are awesome for user<>machine communication, but are still painful to try to use as a replacement for the machine itself.
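A minimal sketch of that split, in Python: the engine owns the turmoil score and the rules that move it, and the model's only job is to render engine-owned state as prose. The `llm_complete` helper here is a hypothetical stub standing in for whatever chat-completion API you actually use.

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    year: int
    turmoil: int = 4            # 0-10 scale, owned by the engine, never the model
    events: list[str] = field(default_factory=list)

def apply_assassination(state: WorldState, target: str, died_soon_irl: bool) -> WorldState:
    """Deterministic rule: the engine decides how much turmoil moves.
    Killing someone who died around this time anyway barely moves the needle."""
    state.turmoil = min(10, state.turmoil + (1 if died_soon_irl else 3))
    state.events.append(f"{target} assassinated ({state.year})")
    return state

def llm_complete(prompt: str) -> str:
    """Hypothetical stub: swap in a real chat-completion call here."""
    return f"[narration of: {prompt[:60]}...]"

def narrate(state: WorldState) -> str:
    # The model only turns numbers it was handed into prose; it never sets them.
    prompt = (
        "Narrate this turn of an alternate-history game in two sentences. "
        f"Do not invent or change numbers. turmoil={state.turmoil}/10, "
        f"events={state.events!r}"
    )
    return llm_complete(prompt)

state = apply_assassination(WorldState(year=1865), "Lincoln", died_soon_irl=True)
print(state.turmoil)   # 5, not 10: the engine, not the model, sets the score
print(narrate(state))
```

With this split, a model that wants to declare 10/10 turmoil on turn one simply can't; the worst it can do is narrate melodramatically.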
Datastar considers its library complete with the v1.0 release [1]: it offers a core hypermedia API, and everything else is optional plugins. The devs have a hot take wrt htmx in their release announcement:
> While it’s going to be a very hot take, I think there is zero reason to use htmx going forward. We are smaller, faster and more future proof. In my opinion htmx is now a deprecated approach but Datastar would not exist but for the work of Carson and the surrounding team.
If you're thinking of adopting htmx, it may be worth comparing it against Datastar as well.