But every time I suggest to a team I’m working with that we should try modelling the problem with it… people are less than enthusiastic.
These are all great tips. Especially starting with the simplest, most abstract model and extending it with refinement if you need to.
That basically means you write your first spec as abstractly and simply as possible, with as little detail about the actual implementation as you can.
If the details matter, you import the first spec as a module and write a new spec that adds them. The new spec then asserts an implication: if the detailed spec holds, the first one holds too.
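To make that concrete, here's a minimal sketch of what that refinement pattern looks like in TLA+, using the classic hour-clock example (the module names and the clock itself are just illustrative, not from any real project). The abstract spec only knows about hours; the detailed spec (a second module/file) adds minutes, instantiates the abstract module, and states the implication:

    ---- MODULE HourClock ----
    EXTENDS Naturals
    VARIABLE hr

    Init == hr \in 1..12
    Next == hr' = IF hr = 12 THEN 1 ELSE hr + 1
    Spec == Init /\ [][Next]_hr
    ====

    ---- MODULE HourMinClock ----
    EXTENDS Naturals
    VARIABLES hr, min

    Init == hr \in 1..12 /\ min = 0
    Next == IF min = 59
            THEN /\ min' = 0
                 /\ hr' = IF hr = 12 THEN 1 ELSE hr + 1
            ELSE /\ min' = min + 1
                 /\ UNCHANGED hr
    Spec == Init /\ [][Next]_<<hr, min>>

    \* Reuse the abstract spec; its variable hr is ours.
    HC == INSTANCE HourClock

    \* Refinement: every behavior of the detailed spec is
    \* also a behavior of the abstract one.
    THEOREM Spec => HC!Spec
    ====

TLC can check that implication mechanically: model check HourMinClock with HC!Spec as a temporal property, and any detailed behavior the abstract spec forbids shows up as a counterexample. The minute ticks are fine because they're just stuttering steps as far as HourClock is concerned.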
Anyway, when I learned TLA+ I was kind of mad that it and systems like it aren’t part of the standard curriculum and common practice.
“We built this distributed database that guarantees quorum if you have at least 3 nodes!” someone will excitedly claim. And when you ask how, you get an informal paper and 100k lines of C++ code. Wouldn’t it be nicer if you just had 1k lines of plain old pure maths?
And pure maths that can be exhaustively checked by the computer? It’s not as perfect as a proof, but it’s great for a lot of practical applications.
For me it’s nicer, I guess.
Now is a great time to dive in, though. LLMs take a lot of the syntactical pain out of the learning experience. Hallucinations are annoying, but you can formally prove they're wrong with the model checker ^_^
I think it's going to be a "learn these tools or fall behind" thing in the age of AI.
Consider my behavior changed. I thought the "high school math" was an encouraging way to sell it (i.e. "if you can get past the syntax and new way of thinking, the 'math' is ultimately straightforward"), but I can see your point, and how the perception would be poor when they hit that initial wall.