You can’t make a rule that says “hey don’t break the rules”.
That seems logically fallacious.
100% line coverage though!
To hire a new employee at market rate, you have to give your team of, let's say, 10 existing employees that much money as well, which is equivalent to hiring another 2 people for 'nothing'.
E.g.: the default hosting model might be to have all of the services in a single process with pass-by-copy messages. One could even have multiple instances of a service pinned to CPU cores, with hash-based load balancing so that L2 and L3 caches could be efficiently utilised.
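As a rough illustration of the hash-based routing idea (a hypothetical sketch, with made-up names; real systems would use consistent hashing and actual core pinning), each message key could hash to a fixed instance index, so the instance pinned to a given core keeps serving the same partition and its caches stay warm:

```python
import zlib

NUM_INSTANCES = 4  # e.g. one service instance pinned per CPU core


def route(key: str) -> int:
    """Pick a stable instance index for a message key."""
    # zlib.crc32 is stable across runs (unlike Python's built-in hash(),
    # which is randomised per process), so the same key always lands on
    # the same pinned instance.
    return zlib.crc32(key.encode()) % NUM_INSTANCES
```

Because the mapping is deterministic, all traffic for a given key hits one instance, and that instance's working set stays resident in its core's L2/L3 caches.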
The “next tier” could be a multi-process host with shared memory. E.g.: there could be permanent “queue” and “cache” services coupled to ephemeral Web and API services. That way, each “app” could be independently deployed and restarts wouldn’t blow away terabytes of built-up cache/state. One could even have different programming languages!
Last but not least, scale-out clusters ought to use RDMA instead of horrifically inefficient JSON-over-HTTPS.
Ideally, the exact same code ought to scale to all three hosting paradigms without a rewrite (but perhaps a recompile).
Some platforms almost-but-not-quite work this way, such as EJB hosts — they can short-circuit networking for local calls. However, they’re not truly polyglot as they don’t support non-JVM languages. Similarly, Service Fabric has some local-host optimisations, but they’re special cases. Kubernetes is polyglot but doesn’t use shared memory and has no single-process mode.
[0]: https://aeroncookbook.com/aeron/overview/

[1]: https://aeroncookbook.com/simple-binary-encoding/overview/
The principle that tests coupled to low-level code give you feedback about tightly coupled code is true, but only because low-level/unit tests themselves couple too tightly to your code — i.e. because they too are bad code!
Have you ever refactored working code into working code and had a slew of tests fail anyway? That's the child of test-driven design.
High-level/integration TDD doesn't give "feedback" on your design; it just tells you whether your code matches the spec. This is actually more useful. It lets you refactor bad code with a safety harness, and it gives failures that actually mean failure, not just "the code changed".

I keep wishing for the idea of test-driven design to die. Writing tests which break on working code is an inordinately uneconomical way to detect design issues, compared to developing an eye for them and fixing them under a test harness that has no opinion on your design.

So yes, this (high-level test-driven development) is TDD, and moreover it has a better cost/benefit trade-off than test-driven design.
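To make the distinction concrete, here's a toy example (entirely hypothetical names and spec): a behaviour-level test keeps passing across refactors, whereas a test coupled to internals would break even though the code still works.

```python
def total_price(items):
    """Spec: total = sum of price * quantity over all items."""
    # This body started life as an explicit for-loop and was refactored
    # into sum() with a generator. The test below stays green, because
    # it asserts the spec, not the shape of the implementation.
    return sum(price * qty for price, qty in items)


# High-level test: failure here means the spec is actually violated.
assert total_price([(2.0, 3), (1.5, 2)]) == 9.0
```

A low-level test might instead have asserted that `total_price` delegates to some private `_accumulate()` helper; deleting that helper during the refactor would fail the test even though every caller still gets the right answer.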
I don’t know what the solution is.
I disagree with the severity of this, and would posit that there are duplications that can't be "fixed" by an abstraction.
There are many instances I've encountered where two pieces of code happened to look similar at a certain point in time. As the codebase evolved, so did the two pieces of code, their usage, and their dependencies, until the similarity was almost gone. An early abstraction that grouped those coincidentally similar pieces of code would then have had to stretch to cover both evolutions.
A "wrong abstraction" in that case isn't an ill-fitting abstraction where a better one was available, it's any (even the best possible) abstraction in a situation that has no fitting generalization, at all.