Almost all of the listed "antipatterns" are things that are bad by definition. "Don't do the thing too much or too little, do the right amount" is not a useful recommendation.
You might think of it as a checklist of possible problems rather than a list of rules to follow blindly. It's true that it won't tell you which problems you have, but it may offer useful hints, things to consider.
Knowing what problems you actually have usually requires human judgement after seeing the situation.
My favourite system to work on is one that I understand.
And everyone refactors to their own understanding and intuition.
And my intuition or understanding might not be identical or as advanced or as simple or insightful as yours. (EDIT: Your understanding that things are SIMPLE might be more advanced than mine, so I don't really understand it as much as you do.)
So we have taste in software.
I would rather not maintain a system that was built on quicksand, where dependencies cannot be upgraded without breaking something.
One person's super elegant architecture is Not Understandable™ to someone else.
This is an interesting perspective which I'm inclined to disagree with. There's little pleasure to be found in having to deal with a system that broke because it was badly designed or implemented, although I guess it means you've got a reasonably secure job for the time being. Being able to gradually refactor it can be fun sometimes I guess, but I'd still rather not have to.
Your second category is more interesting to me - you're interpreting a system that is hard to understand and work on as one made by super intelligent people. I would interpret that as a system that was badly designed, unless you're doing some new and revolutionary thing (you're probably not). A system that has been designed in such a way that only someone with deep knowledge of the thought process can work on it has been designed badly. I know this because I have designed many such systems in the past. Coming back to them a few years later, even I hated myself for it, so I'm deeply sympathetic to the people who had to work on them who weren't me. Thankfully, in most cases I got to task a few people with ripping out the system and replacing it with something better.
But funny: I was trying to think of "good" systems that I ever worked on, but drew a blank. It can't be that I only worked on bad code, right? Maybe this is one of those "when everyone around you is an asshole..." situations!
But now that I actually think deeper about it, the reason I don't remember doing a lot of work in good systems is because I barely had to touch them. They just worked, scaled fine, required very little maintenance.
And on those good systems, building new features was painless: they were always super simple and super familiar to newcomers (using default framework features instead of fancy libraries), because they never deviated from the norm. Things would also pretty much never break because there were failsafes (both in code/infra/linters/etc and in process, like code review).
At my previous job the other person working on our backend was the CTO, who worked part-time and had lots of other CTO duties. I remember spending about 20 hours tops in the span of 2 years on that backend. It was THAT good.
I am very susceptible to the ‘Misapplied Genericity’ anti-pattern. When given a problem, my default approach seems to be building a solution which ends up looking more like a framework that allows you then to build the solution in it. For example - if I were creating a metrics dashboard, I would end up building a dashboard builder which I could then create my metrics dashboard in, rather than just ‘hardcoding’ the dashboard I need right now. Something I need to work on!
I do that as well, but I have eventually managed to train myself to do the hard-coded version first. I find out I need to make it more generic anyway, which validates the need for the genericity, and by that point I have a much better understanding of what exactly benefits from being flexible and where I can take shortcuts compared to what is, in my mind, the perfect design. That way I end up building something in between, which usually works quite well and which I am pleased with.
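The "hard-coded first, generalize on the second use case" approach above can be sketched with a hypothetical version of the metrics dashboard example; the function names and fields here are made up for illustration:

```python
# Hypothetical sketch of "hard-coded first": render the one dashboard
# we actually need today as a plain function. No framework, no builder.
def render_signup_dashboard(metrics):
    lines = ["Signups Dashboard"]
    lines.append(f"daily signups: {metrics['daily_signups']}")
    lines.append(f"conversion: {metrics['conversion_rate']:.1%}")
    return "\n".join(lines)

# Only after a second concrete dashboard shows up do we extract the part
# that actually varies -- the title and the list of fields -- and keep
# everything else hard-coded. The real use cases tell us where to flex.
def render_dashboard(title, fields, metrics):
    lines = [title]
    for label, key, fmt in fields:
        lines.append(f"{label}: {format(metrics[key], fmt)}")
    return "\n".join(lines)
```

The generic version is driven by the shape of the two real dashboards, not by a guess about a dashboard builder nobody asked for.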
Yup. Early in my career I was susceptible to this as well. Partly immaturity and partly business pressure (we need this thing now, and when the business adapts, we don't want to spend more engineering resources on it! make it work for the future!)
turns out that's nearly impossible, in most cases (businesses change)
I definitely take a more iterative approach now. There's a short spike window to architect the rough plan, get buy in from other engineers, and as long as we feel like we're directionally going the right way and we're not digging ourselves into a corner, we ship it.
Sometimes that has resulted in redoing things (we made a mistake in our thinking), but those redos are minimal compared to the weeks or months we might have spent over-architecting something.
I cannot give a yes or no answer in this case. "Generic" is too generic a term to use in this argument.
It depends on the whole project and circumstances. Usually I go with specifics and refactor later. The reason is simple: too often I have experienced that in order to change something in a view, we had to alter the "generic layer". On the other hand, how can you build something generic when you don't have at least 2-3 use cases?
"But we will never refactor" - I am one of the very few who do just that. I worked my way up from dev to senior manager in order to give people the freedom I always missed.
I feel that the framework for integration is the programming language itself, it's Turing complete.
Getting things to glue together in the right way is a challenge, though, which is probably why you want it to be data driven. But inevitably you need some flexibility or logic in your data processing, so you end up building an expression engine, and we get the "inner-platform effect".
Apart from the patterns that are obviously good or bad by definition, most of the patterns and architecture decisions have their pros and cons, and the focus is on understanding, discussing those tradeoffs, and going with tradeoffs that a team / company prefers to deal with. Monolith vs. microservices, synchronous vs. asynchronous communications, small events vs. fat events - the list can go on, there are no silver bullets or clearly right choices.
The project sounds successful overall to me. Yes, they had to do more than they thought going in. That describes most engineering efforts.
Does the author think that operating system API churn just won't affect native somehow? Or be improved when even more of your application surface area is in the native space?
A list of "do-nots" works for tasks that bottom out in science and the laws of physics, like woodworking or fusion physics.
Thinking about coding lacks a connection to a scientific terminus point. Under the hood it's all binary and devs use a performance mindset. Making a list of prohibitions doesn't fit.
To each their own. I prefer to maintain a bad system because:
- I can make it better
- If something doesn't work as expected it's because of the current state of the system, not because of my lack of ability
On the other hand, I don't really like to maintain very good systems (crafted by very intelligent people) because:
- There's little I can do to make them better (I'm a regular Joe)
- If something breaks it's because of my ability as a programmer (all the shame on me)
So, it's like playing in two different leagues (but the paycheck is more or less the same, so that's nice).
https://en.wikipedia.org/wiki/Inner-platform_effect
One of the anti-patterns is "making the system too complicated". Insightful!