When you decide to make up your own definition of determinism, you can win any argument. Good job.
> This is why all of those "national great firewalls" shouldn't exist in the first place
This is a kind of colonialist thinking that is, IMO, a problem in Western society. There are real drawbacks to a lack of freedom, but assuming that a government should never be able to filter the content shown to its population is wrong in principle. You don't get to choose what is right or wrong in every part of the world: that is a very USA-centric way to view society, and it easily leads to "export freedom and democracy" acts. It's a very USA-friendly way to frame things, not necessarily the right one.
Why?
Not coding related, but my wife is certainly better than most, and yet I've had to reprompt certain questions she's asked ChatGPT because she gave it inadequate context. People are awful at that. We coders are probably better off than most, but just as with human communication, if you're not explaining things clearly you're going to get garbage back.
In a way, LLMs are heavily exploitative of human linguistic abilities and expectations. We're wired so hard to actively engage and seek meaning in conversational exchanges that we tend to "helpfully" supply that meaning even when it's absent. We are "vulnerable" to LLMs because they supply all the "I'm talking to a person" linguistic cues, but without any form of underlying mind.
Folks like your wife aren't necessarily "bad" at LLM prompting—they're simply responding to the signals they get. The LLM "seems smart." It seems like it "knows" things, so many folks engage with it naturally, as they would with another person, without painstakingly feeding in context and precisely defining all the edges. If anything, it speaks to just how good LLMs are at being LLMs.
It's the same when programming with LLMs. Through experience, you build up intuition and rules of thumb that let you get good results, even if you don't get exactly the same result every time.
Friend, you have literally described a nondeterministic system. LLM output is nondeterministic. Identical input conditions result in variable output conditions. Even if those variable output conditions cluster around similar ideas or methods, they are not identical.
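To make the point concrete, here's a toy sketch of why sampled output varies for identical input. This is not any real LLM's sampler; the function name, logits, and temperature value are made up for illustration, but the mechanism (temperature-scaled softmax plus random sampling) is the standard one:

```python
import math
import random

random.seed(0)  # fixed seed so this toy run is reproducible

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from logits via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the softmax probabilities
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Identical "input conditions": the exact same logits on every call.
logits = [2.0, 1.5, 1.0]
samples = [sample_token(logits, temperature=1.0) for _ in range(100)]

# Despite identical input, multiple distinct tokens come back.
print(sorted(set(samples)))
```

At temperature 0 (greedy decoding) the argmax token is chosen every time and the output becomes repeatable; any temperature above zero reintroduces the variability described above.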
Now, I do agree a non-standard port is not a security tool, but it doesn't hurt to run on a random high-numbered port.
One less setup step in the runbook, one less thing to remember. But I agree, it doesn't hurt! It just doesn't really help, either.
Enlightenment here comes when you realize others are doing the exact same thing with the exact same justification, and everyone's pain/reward threshold is different. The argument you are making justifies their usage as well as yours.
Not yet, but you're absolutely right. Once a tool like this stops being front of mind, it'll fall right out of my head. It's a bit like driving somewhere versus being driven—I'm a lot more likely to remember how to get to a place if I have to actively navigate to it. If I'm in the passenger seat, all bets are off!