But given that I haven't seen any mention of that issue in other comments, I wonder if it really is an issue.
It's more "portable" than assembly so that the same optimizer passes can work on multiple architectures. The static-single-assignment restriction makes it easier to write compiler passes by removing the complexity that comes with multiple assignments.
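A minimal hand-written illustration (not actual compiler output): because SSA allows each value to be assigned exactly once, a pass never has to track "which assignment is live here" — control-flow merges instead select a value explicitly with a phi node:

```llvm
define i32 @max(i32 %a, i32 %b) {
entry:
  %cmp = icmp sgt i32 %a, %b          ; %cmp is defined once, never reassigned
  br i1 %cmp, label %then, label %else
then:
  br label %merge
else:
  br label %merge
merge:
  ; SSA forbids writing a "result" variable in each branch, so the
  ; merge point picks a value based on which predecessor ran
  %res = phi i32 [ %a, %then ], [ %b, %else ]
  ret i32 %res
}
```

Every use of `%res` downstream refers unambiguously to that one phi, which is what makes dataflow analyses so much simpler to write.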
There is no objective measure of code cleanliness. So if "clean code" is your goal, then you have no meaningful criteria to evaluate alternatives. (Including those pitched by Bob Martin.)
It gets worse, though. There's a subconscious element that causes even more trouble. It's obviously a good thing to write "clean code", right? (Who's going to argue otherwise?) And to do otherwise would be a moral failing.
The foundation on which "Uncle Bob" tries to build is rotten from the get-go. But it's a perfect recipe for dogmatism.
Most American drivers are familiar with slip lanes, which allow drivers to make a right-hand turn without necessarily coming to a stop. (Wikipedia article: https://en.wikipedia.org/wiki/Slip_lane. Note that the diagram shows an example from a country where they drive on the left side of the road.) They're convenient for drivers, but dangerous for anyone on foot or a bike. This is because a driver in a slip lane is looking for an opening in traffic in the direction opposite the direction they're traveling.
A roundabout is basically an intersection made up of nothing but slip lanes. So they're fundamentally dangerous to pedestrians in the same way that a slip lane is, but the fact that vehicles are moving slower means that they're still safer than typical American intersections.
However, if I remember correctly (it's been over 20 years), in the UK they don't mix roundabouts and crosswalks. They'd put crosswalks (called "zebra crossings": https://en.wikipedia.org/wiki/Zebra_crossing) between intersections, where drivers aren't distracted by other things. I think we need to do that in the US as well if we're going to adopt roundabouts.
If you’re storing doubly linked lists in a DB you’re doing it wrong.
Updating doubly linked lists can be done at about 200 million ops/sec, single threaded. I'm not sure why you'd need multiple threads updating the list at the same time. What exactly are you doing that can't be solved by putting a ring buffer in front of a single thread that updates the values? No locks needed, and it's cache coherent.
Assuming that the database uses B+ trees (like most do), then the database records themselves are very likely to be in a doubly linked list.
Not every doubly linked list is the kind you see in an introductory data structures class.
But this is not TDD.
I've worked in the HW industry, where the cost of bugs is high, and we analyzed specs during brainstorming sessions as 3-5 people and brainstormed the test cases we wanted to test.
It worked well because this way we were finding way more things to test than when doing it alone.
But still, this is not TDD. We weren't doing TDD.
>(Particularly, it seems to encourage the single-responsibility principle, as code that's doing too much or combining layers of abstraction is really hard to test.)
Whether your code is easily testable will be challenged by writing tests regardless of when you write them - before or after the implementation.
So there's no benefit from TDD over a non-TDD approach.
That's true. Did you miss the part where I mentioned the "TDD process" versus the "TDD mindset"?
> Whether your code is easily testable will be challenged by writing tests regardless of the moment of writing test - before or after writing impl.
I don't want to derail the conversation, but when I'm working on projects alone then my test coverage is 100%. (Line and branch coverage.) But surely that slows me down, right? And I must have a million test cases? No, quite the opposite actually. Writing code that way is just a skill though, the same as juggling, that becomes easy with enough practice. (Along with the right practices and tools.) However, I don't aim for 100% (or even a particularly high percentage, to be honest) when I'm working on projects with other people, because the only way it's possible is if the code was designed for it from the get-go.
I encourage junior developers to learn and try to use TDD for a while because it can be useful sometimes and improve the way they write code. But I would never require them to do it.
I've seen like a billion discussions about TDD and I still don't understand why it's so overhyped.
Additionally it sucks that for some people you either do TDD or don't write tests at all (what the f...., indeed).
This whole red-green step in TDD makes no sense at all when you're writing new code.
The only value provided by TDD when writing new code is that you're forced to think from caller/user perspective, so it makes your API design better, that's it.
Listing out the tests you're going to write before you write the code (even mentally) can be considered a continuation of the requirements-gathering process. And thinking about how you're going to test your code before you write it will, in my experience, improve the design. (Particularly, it seems to encourage the single-responsibility principle, as code that's doing too much or combining layers of abstraction is really hard to test.)
I used to write low-scale Java apps, and now I write memory-intensive Go apps. I've often wondered what would happen if Go did have a JVM-style GC.
It's relatively common in Go to resort to idioms that let you avoid hitting the GC. Some things that come to mind:
* all the tricks you can do with slices, where two slice headers point to the same block of memory [1]
* object pooling, something so common in Go it's part of the standard library [2]
Both are technically possible in Java, but I've never seen them used commonly (though in fairness I've never written performance-critical Java). If Go had a more sophisticated GC, would these techniques still be necessary?
Also Java is supposed to be getting value types soon (tm) [3]
[1] https://ueokande.github.io/go-slice-tricks/