Interesting bug: when pressing the text-to-speech button to hear my Swedish story read out loud, what I get is a pretty good rendition of the text read as if it were French.
The HoTT people are doing good work and I don't doubt that automation can help with the _significant_ additional complexity of higher dimensional cubes, but that's not really all that would be missing.
Finally, if you can't claim Lean is good enough for all of mathematics, then you can't claim it for any existing system, or for any system that doesn't take classical logic seriously (postulating an axiom doesn't count).
Correct me if I'm wrong (I may well be), but couldn't one work classically just by sticking to (-1)-truncated types ("mere propositions") in a HoTT-based system, for which LEM is true by default?
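For concreteness, the restricted excluded-middle principle in question can be written (in HoTT-book-style notation, where isProp picks out the (-1)-truncated types) roughly as:

```latex
% LEM restricted to mere propositions, in HoTT-book-style notation
\mathrm{LEM}_{-1} \;:\equiv\; \prod_{A \,:\, \mathcal{U}} \bigl(\mathrm{isProp}(A) \to (A + \neg A)\bigr)
```

Unlike the unrestricted version quantified over arbitrary types, this truncated form is the one the HoTT book treats as compatible with univalence.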
It's easy to get a superficial understanding of the problem, and then very easy to subtly misunderstand it.
I've reviewed published papers by respectable people where they've made one of several easy mistakes:
* Be careful about how you encode your problem: it's easy to make the encoding "too large" (at which point the problem can be solved in time polynomial in the padded input size), but also easy to make it too compact, at which point the natural certificate no longer fits. For example, don't represent a Sudoku as triples "x-pos,y-pos,value" with those 3 numbers encoded in binary, because now if I give you a puzzle with only 2 filled-in values you can't use the solution as your 'certificate': the input is about size 6 * log(n), but the solution will be n * n * log(n) (a rough size sketch follows this list).
* It's also easy, if you write your problem as a little language, to accidentally make solutions impossible to check in P time.
* When proving a reduction (say, turning SAT into a Sudoku to prove Sudoku is NP-complete), it's usually easy to show how solutions map to solutions (you show how the SAT instance's solution turns into a particular solution of the Sudoku). It's usually much harder, and much easier to get wrong, to show that the Sudoku can't possibly have any other, unintended solutions (a toy sanity-check sketch also follows below).
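To make the first point concrete, here's a rough back-of-the-envelope sketch (plain Python; the generalized n x n Sudoku-like board and the helper names are just for illustration) comparing the bits needed for a sparse triple-encoded input against the bits needed to write down a full solution grid:

```python
import math

def input_bits(filled_cells: int, n: int) -> int:
    """Bits to encode `filled_cells` clues as (row, col, value) triples,
    with each number written in binary, for an n x n board."""
    bits_per_number = max(1, math.ceil(math.log2(n)))
    return filled_cells * 3 * bits_per_number

def solution_bits(n: int) -> int:
    """Bits to write down a full n x n solution grid, one value per cell."""
    bits_per_number = max(1, math.ceil(math.log2(n)))
    return n * n * bits_per_number

if __name__ == "__main__":
    # With only 2 clues the triple encoding is ~6 * log2(n) bits, while the
    # obvious certificate (the full grid) is ~n^2 * log2(n) bits -- i.e.
    # exponential as a function of the input length.
    for n in (9, 256, 65536):
        print(f"n={n}: input ~{input_bits(2, n)} bits, solution ~{solution_bits(n)} bits")
```

For n = 65536 the two-clue input is under 100 bits while the full grid is tens of gigabits, so under that encoding the grid can't serve as the NP witness.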
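And for the last point, one cheap defence is to brute-force tiny instances and check that the target problem has no spurious solutions. A minimal sketch, with hypothetical helper names (check_reduction_on_instance, decode, etc.) and a deliberately trivial toy "reduction" just to keep it runnable:

```python
from typing import Callable, Iterable

def check_reduction_on_instance(
    source_solutions: Iterable,      # all solutions of the source instance (e.g. SAT assignments)
    target_solutions: Iterable,      # all solutions of the target instance (e.g. completed grids)
    decode: Callable,                # maps a target solution back to a candidate source solution
    is_source_solution: Callable,    # checks a candidate against the source instance
) -> bool:
    """Brute-force check of the 'hard direction': every solution of the
    target instance must decode to a genuine solution of the source
    instance, and the two instances must agree on solvability."""
    src = list(source_solutions)
    tgt = list(target_solutions)
    no_spurious = all(is_source_solution(decode(t)) for t in tgt)
    agree = (len(src) > 0) == (len(tgt) > 0)
    return no_spurious and agree

if __name__ == "__main__":
    # Toy illustration only: the "source" is a 2-variable SAT instance and
    # the "target" is the same instance under an identity reduction; in a
    # real proof target_solutions would be, say, completed Sudoku grids.
    clauses = [(1, 2), (-1,)]        # (x1 or x2) and (not x1)
    def sat(assign):                 # assign: dict mapping variable -> bool
        return all(any(assign[abs(lit)] == (lit > 0) for lit in c) for c in clauses)
    assignments = [{1: a, 2: b} for a in (False, True) for b in (False, True)]
    source_solutions = [a for a in assignments if sat(a)]
    target_solutions = list(source_solutions)   # identity "reduction"
    print(check_reduction_on_instance(source_solutions, target_solutions,
                                      decode=lambda t: t,
                                      is_source_solution=sat))
```

This obviously proves nothing in general, but running it over all small instances you can enumerate catches exactly the "unintended solutions" mistakes that slip past reviewers.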