0) Cache invalidation
1) Naming things
5) Asynchronous callbacks
2) Off-by-one errors
3) Scope creep
6) Bounds checking
Computer “science” has a genuinely difficult conceptual problem with caching. The optimal cache, this science tells us, is indistinguishable from a fortune teller who is never wrong: an oracle that already knows the future access pattern. Fortune telling is a “hard” problem for a science based on reasoning, so the best we can do is hedge our bets, and that is exactly what the science of caching focuses on.
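To make the oracle point concrete, here is a minimal sketch (function names `lru_misses` and `belady_misses` are made up for illustration) comparing a practical hedge, LRU eviction, against the clairvoyant policy that evicts whatever is needed farthest in the future. The clairvoyant policy is only computable here because the toy trace is known in advance:

```python
from collections import OrderedDict

def lru_misses(accesses, capacity):
    """Count misses under LRU: the practical hedge, no future knowledge."""
    cache = OrderedDict()
    misses = 0
    for key in accesses:
        if key in cache:
            cache.move_to_end(key)          # mark as most recently used
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)   # evict least recently used
            cache[key] = None
    return misses

def belady_misses(accesses, capacity):
    """Count misses for the clairvoyant policy: evict the cached item
    whose next use lies farthest in the future (or never comes)."""
    cache = set()
    misses = 0
    for i, key in enumerate(accesses):
        if key in cache:
            continue
        misses += 1
        if len(cache) >= capacity:
            def next_use(k):
                for j in range(i + 1, len(accesses)):
                    if accesses[j] == k:
                        return j
                return float('inf')        # never used again: ideal victim
            cache.remove(max(cache, key=next_use))
        cache.add(key)
    return misses

trace = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4]
print(lru_misses(trace, 3))     # 6
print(belady_misses(trace, 3))  # 5
```

On this trace the oracle beats LRU (5 misses vs 6), and no implementable policy can beat the oracle; everything real is an approximation of it.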
This same science also has difficulty with naming things. Numbering things is easy (science loves maths, and maths loves science back), but science and letters have a more difficult history. Science would approve of “factoryfactoryfactoryImpl”, btw ... it’s “a rational scheme of naming”.
Here we see a “science” that is facing actual difficulties.
The rest of your list is difficult but not “hard”. The science of those matters is clear; the rest is up to the “scientists” struggling with “scope creep” and “bounds checking”.
The reasons append-only structures were chosen in general apply surprisingly often to any particular system that scales to large data and wants to be robust to a wide variety of real-world problems. You won't see this by looking at the data structure, because the hard parts are abstracted away from you. But you'll see it if you try to reimplement the same system from scratch and then scale it up to production.
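A toy sketch of the idea (the class name `Log` and its methods are invented for illustration, with a list standing in for a file opened in append mode): writes only ever append, so a crash mid-write can corrupt at most the final record, the current state is always recoverable by replaying the log, and old data is never mutated in place.

```python
class Log:
    """Toy append-only key-value store; records is a stand-in for a
    file opened in append mode."""

    def __init__(self):
        self.records = []

    def put(self, key, value):
        self.records.append((key, value))   # never rewrite in place

    def get(self, key):
        # Replay in reverse: the last record written for a key wins.
        for k, v in reversed(self.records):
            if k == key:
                return v
        return None

    def compact(self):
        # Compaction reclaims space without mutating old records:
        # build a fresh log holding only the latest value per key.
        latest = {}
        for k, v in self.records:
            latest[k] = v
        self.records = list(latest.items())

log = Log()
log.put("a", 1)
log.put("a", 2)               # supersedes the old value, never overwrites it
print(log.get("a"))           # 2
print(len(log.records))       # 2 before compaction
log.compact()
print(len(log.records))       # 1 after compaction
```

The hard production parts (fsync ordering, partial-record detection, compacting concurrently with writes) are exactly the abstracted-away details the comment is pointing at.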
Both points, imho, need to be kept in mind in design.