PheonixPharts · 4 years ago
The trouble with TDD is that quite often we don't really know how our programs are going to work when we start writing them, and often make design choices iteratively as we start to realize how our software should behave.

This ultimately means, what most programmers intuitively know, that it's impossible to write adequate test coverage up front (since we don't even really know how we want the program to behave) or worse, test coverage gets in the way of the iterative design process. In theory TDD should work as part of that iterative design, but in practice it means a growing collection of broken tests and tests for parts of the program that end up being completely irrelevant.

The obvious exception to this, where I still use TDD, is when implementing a well defined spec. Anytime you need to build a library to match an existing protocol, well documented api, or even a non-trivial mathematical function, TDD is a tremendous boon. But this is only because the program behavior is well defined.

The times I've used TDD where it makes sense, it's been a tremendous productivity increase. If you're implementing some standard, you can basically write the tests to confirm you understand how the protocol/api/function works.
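As a hedged sketch of that spec-driven case, take the Luhn checksum used on card numbers: because the algorithm is fully specified and has published known-answer values, the tests can be written straight from the spec before any implementation exists.

```python
# Hypothetical test-first example for a well-specified function:
# the Luhn check. The assertions below come directly from published
# example numbers, so they could be written before the code.

def luhn_valid(number: str) -> bool:
    """Return True if `number` passes the Luhn check."""
    digits = [int(c) for c in number if c.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9  # same as summing the two digits of d
        checksum += d
    return checksum % 10 == 0

# Tests written first, from the spec's known-answer vectors:
assert luhn_valid("79927398713")        # canonical valid example
assert not luhn_valid("79927398710")    # altered check digit
assert luhn_valid("4111111111111111")   # standard test card number
```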

Unfortunately most software is just not well defined up front.

happytoexplain · 4 years ago
>Unfortunately most software is just not well defined up front.

This is exactly how I feel about TDD, but it always feels like you're not supposed to say it. Even in environments where features are described, planned, designed, refined, written as ACs, and then developed, there are still almost always pivots made or holes filled in mid-implementation. I feel like TDD is not for the vast majority of software in practice - it seems more like something useful for highly specialist contexts with extremely well defined objective requirements that are made by, for, and under engineers, not business partners or consumers.

ethbr0 · 4 years ago
I forget which famous Unix personality the quote / story comes from, but it amounts to "The perfect program is the one you write after you finish the first version, throw it in the garbage, and then handle in the rewrite all the things you didn't know that you didn't know."

That rings true to my experience, and TDD doesn't add much to that process.

gofreddygo · 4 years ago
Yes, it's not well defined, neither before nor after implementing. I've made peace with accepting it never will be.

An implementation without definition, and a whole host of assumptions gets delivered as v1.

Product expectations get lowered, bugs and defects raised, implementation is monkey patched as v2.

devs quit, managers get promoted, new managers hire new devs, they ask for the definition and they're asked to follow some flavor of the year process (TDD, Agile, whatever).... rinse and repeat v3.

Sad. True. Helpless. Hopeless.

8n4vidtmkvmk · 4 years ago
It doesn't matter how well your app is designed; your UX designer is not going to tell you you need a function that does X. You just build something that looks like the thing they want, and then write some tests to make sure someone doesn't break that thing. And if you have to write a dozen functions to create it and they're testable, then you test them, but you don't say "oh no, I can't write a 13th function now because that wasn't part of the preordained plan."
theptip · 4 years ago
I think part of what you are getting at here also points to differences in what people mean by “unit test”.

It’s always possible to write a test case that covers a new high-level functional requirement as you understand it; part of the skill of test-first (disclaimer - I use this approach sometimes but not religiously and don’t consider myself a master at this) is identifying the best next test to write.

But a lot of people cast “unit test” as “test for each method on a class” which is too low-level and coupled to the implementation; if you are writing those sort of UTs then in some sense you are doing gradient descent with a too-small step size. There is no appreciable gradient to move down; adding a new test for a small method doesn’t always get you closer to adding the next meaningful bit of functionality.

Where I have done best with TDD is when I start with what most would call “functional tests” and test the behaviors, which is isomorphic to the design process of working with stakeholders to think through all the ways the product should react to inputs.

I think the early TDD guys like Kent Beck probably assumed you are sitting next to a stakeholder so that you can rapidly iterate on those business/product/domain questions as you proceed. There is no “upfront spec” in agile, the process of growing an implementation leads you to the next product question to ask.

merlincorey · 4 years ago
> But a lot of people cast “unit test” as “test for each method on a class” which is too low-level and coupled to the implementation; if you are writing those sort of UTs then in some sense you are doing gradient descent with a too-small step size. There is no appreciable gradient to move down; adding a new test for a small method doesn’t always get you closer to adding the next meaningful bit of functionality.

In my experience, the best time to do "test for each method on a class" or "test for each function in a module" is when the component in question is a low level component in the system that must be relied upon for correctness by higher level parts of the system.

Similarly, in my experience, it is often a waste of effort and time to do such thorough low level unit testing on higher level components composed of multiple lower level components. In those cases, I find it's much better to write unit tests at the highest level possible (i.e. checking `module.top_level_super_function()` inputs produce expected outputs or side effects)
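A minimal sketch of that split (all names here are invented for illustration): exhaustive tests on a low-level helper that higher levels must rely on, but only an input/output check at the composed top level.

```python
# Hypothetical example of testing granularity by layer.

def normalize_phone(raw: str) -> str:
    """Low-level helper: strip formatting, keep the last 10 digits."""
    digits = "".join(c for c in raw if c.isdigit())
    return digits[-10:] if len(digits) >= 10 else digits

def format_contact(name: str, phone: str) -> str:
    """Higher-level function composed from the helper."""
    return f"{name} <{normalize_phone(phone)}>"

# Thorough per-function tests on the low-level component...
assert normalize_phone("(555) 123-4567") == "5551234567"
assert normalize_phone("+1 555 123 4567") == "5551234567"
assert normalize_phone("911") == "911"

# ...but only a boundary-level check on the composed function.
assert format_contact("Ada", "(555) 123-4567") == "Ada <5551234567>"
```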

commandlinefan · 4 years ago
> a lot of people cast “unit test” as “test for each method on a class” which is too low-level

Definitely agree with you here - I've seen people dogmatically write unit tests for getter and setter methods at which point I have a hard time believing they're not just fucking with me. However, there's a "sweet spot" in between writing unit tests on every single function and writing "unit tests" that don't run without a live database and a few configuration files in specific locations, which (in my experience) is more common when you ask a mediocre programmer to try to write some tests.

mkl95 · 4 years ago
> But a lot of people cast “unit test” as “test for each method on a class” which is too low-level and coupled to the implementation;

Those tests suit a project that applies the open-closed principle strictly, such as libraries / packages that will rarely be modified directly and will mostly be used by "clients" as their building blocks.

They don't suit a spaghetti monolith with dozens of leaky APIs that change on every sprint.

The harsh truth is that in the industry you are more likely to work with spaghetti code than with stable packages. "TDD done right" is a pipe dream for the average engineer.

slevcom · 4 years ago
This is well said.

I always suspect that many people who have a hard time relating to TDD already have experience writing these class & method oriented tests. So they understandably struggle with trying to figure out how to write them before writing the code.

Thinking about tests in terms of product features is how it clicked for me.

That being said, as another poster above mentioned, using TDD for unstable or exploratory features is often unproductive. But that’s because tests for uncertain features are often unproductive, regardless if you wrote them before or after.

I once spent months trying to invent a new product using TDD. I was constantly deleting tests because I was constantly changing features. Even worse, I found myself resisting changing features that needed changing because I was attached to the work I had done to test them. I eventually gave up.

I still use TDD all the time, but not when I’m exploring new ideas.

P5fRxh5kUvp2th · 4 years ago
The above poster used 'TDD', not 'unit test', they are not the same thing.

You can (and often should!) have a suite of unit tests, but you can choose to write them after the fact, and after the fact means after most of the exploration is done.

I think if most people stopped thinking of unit tests as a correctness mechanism and instead thought of them as a regression mechanism unit tests as a whole would be a lot better off.

generalk · 4 years ago
+1 on "well defined spec" -- a lot of Healthcare integrations are specified as "here's the requests, ensure your system responds like this" and being able to put those in a test suite and know where you're at is invaluable!

But TDD is fantastic for growing software as well! I managed to save an otherwise doomed project by rigorously sticking to TDD (and its close cousin Behavior Driven Development.)

It sounds like you're expecting that the entire test suite ought to be written up front? The way I've had success is to write a single test, watch it fail, fix the failure as quickly as possible, repeat, and then once the test passes fix up whatever junk I wrote so I don't hate it in a month. Red, Green, Refactor.

If you combine that with frequent stakeholder review, you're golden. This way you're never sitting on a huge pile of unimplemented tests; nor are you writing tests for parts of the software you don't need. For example from that project: week one was the core business logic setup. Normally I'd have dove into users/permissions, soft deletes, auditing, all that as part of basic setup. But this way, I started with basic tests: "If I go to this page I should see these details;" "If I click this button the status should update to Complete." Nowhere do those tests ask about users, so we don't have them. Focus remains on what we told people we'd have done.
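One red-green iteration of that loop might look like this minimal sketch (the `Order` class and its API are invented for illustration): the behavioral test is written first and fails until the method exists, then passes.

```python
# Hypothetical red-green-refactor step for "If I click this button
# the status should update to Complete."

class Order:
    def __init__(self):
        self.status = "Pending"

    def complete(self):
        # Written only after the test below was seen to fail (red).
        self.status = "Complete"

def test_completing_an_order_updates_status():
    order = Order()
    order.complete()
    assert order.status == "Complete"

test_completing_an_order_updates_status()
```

Note the test speaks in terms of the behavior a stakeholder asked for, not the internals of `Order`.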

I know not everyone works that way, but damn if the results didn't make me a firm believer.

wenc · 4 years ago
The problem I’ve run into is that when you’re iterating fast, writing code takes double the time when you also have to write the tests.

Unit tests are still easy to write, but most complex software has many parts that combine combinatorially, and writing integration tests requires lots of mocking. This investment pays off when the design is stable, but when business requirements are not that stable this becomes very expensive.

Some tests are actually very hard to write: I once led a project where the code had both cloud and on-prem API calls (and called Twilio). Some of those environments were outside our control, but we still had to make sure we handled their failure modes. The testing code was very difficult to write, and I wished we'd waited until we stabilized the code before attempting to test. There were too many rabbit holes that we naturally got rid of as we iterated, and testing was like a ball and chain that made everything super laborious.

TDD also represents a kind of first order thinking that assumes that if the individual parts are correct, the whole will likely be correct. It’s not wrong but it’s also very expensive to achieve. Software does have higher order effects.

It’s like the old car analogy. American car makers used to believe that if you QC every part and make unit tolerances tight, you’ll get a good car on final assembly (unit tests). This is true if you can get it right all the time but it made US car manufacturing very expensive because it required perfection at every step.

Ironically Japanese carmakers eschewed this and allowed loose unit tolerances, but made sure the final build tolerance worked even when the individual unit tolerances had variation. They found this made manufacturing less expensive and still produced very high quality (arguably higher quality since the assembly was rigid where it had to be, and flexible where it had to be). This is craftsman thinking vs strict precision thinking.

This method is called “functional build” and Ford was the first US carmaker to adopt it. It eventually came to be adopted by all car makers.

https://www.gardnerweb.com/articles/building-better-vehicles...

tsimionescu · 4 years ago
There are two problems I've seen with this approach. One is that sometimes the feature you implemented and tested turns out to be wrong.

Say, initially you were told "if I click this button the status should update to complete", you write the test, you implement the code, rinse and repeat until a demo. During the demo, you discover that actually they'd rather the button become a slider, and it shouldn't say Complete when it's pressed, it should show a percent as you pull it more and more. Now, all the extra care you did to make sure the initial implementation was correct turns out to be useless. It would have been better to have spent half the time on a buggy version of the initial feature, and found out sooner that you need to fundamentally change the code by showing your clients what it looks like.

Of course, if the feature doesn't turn out to be wrong, then TDD was great - not only is your code working, you probably even finished faster than if you had started with a first pass + bug fixing later.

But I agree with the GP: unclear and changing requirements + TDD is a recipe for wasted time polishing throw-away code.

Edit: the second problem is well addressed by a sibling comment, related to complex interactions.

andix · 4 years ago
TDD usually means that you write the tests before writing the code.

Writing tests as you write the code is just regular and proper software development.

twic · 4 years ago
No, this is nonsense. You don't write the test coverage up front!

You think of a small chunk of functionality you are confident about, write the tests for that (some people say just one test, I am happy with up to three or so), then write the implementation that makes those tests pass. Then you refactor. Then you pick off another chunk and 20 GOTO 10.

If at some point it turns out your belief about the functionality was wrong, fine. Delete the tests for that bit, delete the code for it, make sure no other tests are broken, refactor, and 20 GOTO 10 again.

The process of TDD is precisely about writing code when you don't know how the program is going to work upfront!

On the other hand, implementing a well-defined spec is when TDD is much less useful, because you have a rigid structure to work to in both implementation and testing.

I think the biggest problem with TDD is that completely mistaken ideas about it are so widespread that comments like this get upvoted to the top even on HN.

switchbak · 4 years ago
I feel like I'm in crazy town in this thread. Most of the replies seem to be misunderstanding the intent of TDD, and yours is one of the few that gets it right.

Is general understanding of TDD really that far off the mark? I had no idea, and I've been doing this for essentially 2 decades now.

shados · 4 years ago
The big issue I see when people have trouble with TDD is really a cultural one and one around the definition of tests, especially unit tests.

If you're thinking of unit tests as the thing that catches bugs before going to production and proves your code is correct, and want to write a suite of tests before writing code, that is far beyond the capabilities of most software engineers in most orgs, including my own. Some folks can do it, good for them.

But if you think of unit tests as a way to make sure individual little bits of your code work as you're writing them (that is, you're testing "the screws" and "the legs" of the tables, not the whole table), then it's quite simple and really does save time, and you certainly do not need full specs or even know what you're doing.

Write 2-3 simple tests, write a function, write a few more tests, write another function, realize the first function was wrong, replace the tests, write the next function.
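One pass of that loop might look like this minimal sketch (the function and its tests are invented for illustration): a few tiny, disposable checks on one "screw" of the table, written right alongside the function.

```python
# Hypothetical example of small, disposable per-function tests.

def parse_price(text: str) -> int:
    """Parse a price like '$12.50' into integer cents."""
    cleaned = text.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

# The 2-3 simple tests written just before/alongside the function.
# If parse_price changes shape in a refactor, these get deleted
# and rewritten without ceremony.
assert parse_price("$12.50") == 1250
assert parse_price("3") == 300
assert parse_price(" $0.99 ") == 99
```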

You need to test your code anyway and type systems only catch so much, so even if you're the most agile place ever and have no idea how the code will work, that approach will work fine.

If you do it right, the tests are trivial to write and are very short and disposable (so you don't feel bad when you have to delete them in the next refactor).

Do you have a useful test suite to do regression testing at the end? Absolutely not! In the analogy, if you have tests for a screw attaching the leg of a table, and you change the type of legs and the screws to hook them up, of course the tests won't work anymore. What you have is a set of disposable but useful specs for every piece of the code though.

You'll still need to write tests to handle regressions and integration, but that's okay.

Scarblac · 4 years ago
And I think most people who don't write tests in code work that way anyway, just manually -- they F5 the page, or run the code some other way.

But the end result of writing tests is often that you create a lot of testing tied to what should be implementation details of the code.

E.g. to write "more testable" code, some people advocate making very small functions. But the public API doesn't change. So if you test only the internal functions, you're just making it harder to refactor.

0x457 · 4 years ago
Many people have a wrong perception of TDD. The main idea is to break a large, complicated thing into many small ones until there is nothing left, like you said.

You're not supposed to write every single test upfront, you write a tiny test first. Then you add more and refactor your code, repeat until there is nothing left of that large complicated thing you were working on.

There are also people who test stupid things and 3rd party code in their tests, and they either get fatigued by it and/or think their tests are well written.

thrwyoilarticle · 4 years ago
>If you do it right, the tests are trivial to write and are very short and disposable (so you don't feel bad when you have to delete them in the next refactor).

The raison d'etre of TDD is that developers can't be trusted to write tests that pass for the right reason - that they can't be trusted to write code that isn't buggy. Yet it depends on them being able to write tests with enough velocity that they're cheap enough to dispose?

sanderjd · 4 years ago
Yep, TDD for little chunks of code is really nice. I think of it as just a more structured way of trying things out in a repl as you go (and it works for languages without repls). Even if you decide to never check the test in because the chunk of code ended up being too simple for a regression test to be useful, if it was helpful in testing assumptions while developing the code, that's great.

But yeah, trying to write all the tests for a whole big component up front, unless it's for something with a stable spec (eg. I once implemented some portions of the websockets spec in servo, and it was awesome to have an executable spec as the tests), is usually an exercise in frustration.

larschdk · 4 years ago
I think we should try to separate exploration from implementation. Some of the ugliest untestable code bases I have worked with have been the result of someone using exploratory research code for production. It's OK to use code to figure out what you need to build, but you should discard it and create the testable implementation that you need. If you do this, you won't be writing tests up front when exploring the solution space, but you will be when doing the final implementation.
codereviewed · 4 years ago
Have you ever had to convince a non-technical boss or client that the exploratory MVP you wrote and showed to them working must be completely rewritten before going into production? I tried that once when I attempted to take us down the TDD route and let me tell you, that did not go over well.

People blame engineers for not writing tests or doing TDD when, if they did, they would likely be replaced with someone who can churn out code faster. It is rare, IME, to have culture where the measured and slow progress of TDD is an acceptable trade off.

is0tope · 4 years ago
I've always favored exploration before implementation [1]. For me, TDD has immense benefit when adding something well defined, or when fixing bugs. When it comes to building something from scratch, I found it to get in the way of the iterative design process.

I would however be more amenable to e.g. Prototyping first, and then using that as a guide for TDD. Not sure if there is a name for that approach though. "spike" maybe?

[1] https://www.machow.ski/posts/galls-law-and-prototype-driven-...

andix · 4 years ago
Most projects don’t have the budget to rewrite the code, once it is working.
gabereiser · 4 years ago
I think this is the reasonable approach I take. It's ok to explore and figure out the what. Once you know (or the business knows) then it's time to write a final spec and test coverage. In the end, the mantra should be "it's just code".
happytoexplain · 4 years ago
This makes sense, but I think many (most?) pipelines don't allow for much playtime because they are too rigid and top-down. At best you will convince somebody that a "research task" is needed, but even that is just another thing you have to get done in the same given time frame. Of course this is the fault of management, not of TDD.
silversmith · 4 years ago
> The trouble with TDD is that quite often we don't really know how our programs are going to work

Interesting - for me, that's the only time I truly practice TDD, when I don't know how the code is going to work. It allows me to start with describing the ideal use case - call the API / function I would like to have, describe the response I would expect. Then work on making those expectations a reality. Add more examples. When I run into a non-trivial function deeper down, repeat - write the ideal interface to call, describe the expected response, make it happen.

For me, TDD is the software definition process itself. And if you start with the ideal interface, chances are you will end up with something above average, instead of whatever happened to fall in place while arranging code blocks.
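A toy sketch of that "ideal interface first" move (the URL-shortener API here is imagined, not a real library): write the call you wish you had as a test, state the expected response, then build toward it.

```python
# Hypothetical example: define the interface by writing its ideal
# invocation first, then make the expectation a reality.
import hashlib

# Step 1: the call I would like to have, captured as a test.
def test_shorten_url_returns_short_slug():
    slug = shorten("https://example.com/some/very/long/path")
    assert len(slug) <= 8
    assert slug.isalnum()

# Step 2: an implementation grown to meet that expectation.
def shorten(url: str) -> str:
    return hashlib.sha256(url.encode()).hexdigest()[:8]

test_shorten_url_returns_short_slug()
```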

BurningFrog · 4 years ago
Agile, as the name hints, was developed precisely to deal with ever changing requirements. In opposition to various versions of "first define the problem precisely, then implement that in code, and then you're done forever".

So the TDD OP describes here is not an Agile TDD.

The normal TDD process is:

    1. add one test
    2. make it (and all others) pass
    3. maybe refactor so code is sane
    4. back to 1, unless you're done.
When requirements change, you go to 1 and start adding or changing tests, iterate until you're done.

tra3 · 4 years ago
Exactly. Nobody's on board with paying at least twice as much for software though. But that's what you get when things change and you have to refactor BOTH your code AND your tests.
pjmlp · 4 years ago
Add one test for GUI code.....
Alex3917 · 4 years ago
> The trouble with TDD is that quite often we don't really know how our programs are going to work when we start writing them

Even if you know exactly how the software is going to work, how would you know if your test cases are written correctly without having the software to run them against? For that reason alone, the whole idea of TDD doesn't even make sense to me.

rcxdude · 4 years ago
One reason why TDD can be a good idea is that the cycle involves actually testing the test cases: if you write the test, run it and see that it fails, then write the code, then run the test again and see that it succeeds, you can have some confidence the test is actually testing something (not necessarily the right thing, but at least something). Whereas if you're writing the test after writing the code and expect that it will succeed the first time you run it, it's quite possible to write a test which doesn't actually test anything and will always succeed. (There are other techniques like mutation testing which may get you a more robust indication that your tests actually depend on the state of your software, but I've rarely seen them used in practice.)
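A contrived Python illustration of that failure mode: the first test "passes" while asserting nothing, because its loop body never runs; the second was run against an empty stub and seen to fail first, so it demonstrably exercises the behavior.

```python
# Invented function and bug, purely for illustration.

def find_errors(lines):
    return [l for l in lines if l.startswith("ERROR")]

# Vacuous after-the-fact test: find_errors([]) is empty, so the
# loop body never executes and this passes even if find_errors
# were completely broken.
def test_vacuous():
    for line in find_errors([]):
        assert line.startswith("ERROR")

# Test-first version: this failed against a stub that returned [],
# so we know it actually depends on the implementation.
def test_seen_red_first():
    result = find_errors(["ERROR: disk full", "INFO: ok"])
    assert result == ["ERROR: disk full"]

test_vacuous()
test_seen_red_first()
```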
Byamarro · 4 years ago
Most tests shouldn't be hard to read and reason about so it shouldn't be a problem. In case of more complex tests, you can do it like you would do it during iterative development - debug tests and code to figure out what's wrong - nothing changes here.
majikandy · 4 years ago
It’s funny that in your paragraph there I thought you were about to write… “for that reason alone, TDD is the only way that makes sense to me.”

The reason is, the tests and the code are symbiotic, your tests prove the code works and your code proves the tests are correct. TDD guarantees you always have both of those parts. But granted it is not the only way to get those 2 parts.

You can still throw into the mix times when a bug is present, and it is this symbiotic relationship that helps you find the bug fast, change the test to exercise the newly discovered desired behaviour, see the test go red for the correct reason and then make the code tweak to pass the test (and see all the other tests still pass).

regularfry · 4 years ago
Because the test you've just written (and only that test) fails in the way you expect when you run the suite.
f1shy · 4 years ago
This is exactly my problem with TDD. Note this problem is not only in SW. For any development you do, you could start with designing tests. You can do it for some HW for sure. If you want to apply TDD to any other development, you see pretty fast what the problem is: you are going to design lots of tests that at the end will not be used. A total waste. Also, with TDD the focus is often on the quantity of tests and not so much the quality.

What I find is a much, much better approach is what I call "detached test development" (DTD). The idea is: 2 separate teams get the requirements; one team writes code, the other writes tests. They do not talk to each other! Only when a test is not passed do they have to discuss: is the requirement not clear enough? What is the part that A thought about, but not B? Assignment of tests and code can be mixed, so a team makes code for requirements 1 through 100, and tests for 101 to 200, or something like that. I had very, very good results with such an approach.

switchbak · 4 years ago
Who starts with designing just the tests? I have no idea how this is an association with TDD.

TDD is a feedback cycle, you write small increments of tests before writing a small bit of a code. You don't write a bunch of tests upfront, that'd be silly. The whole point is to integrate small amounts of learning as you go, which help guide the follow-on tests, as well as the actual implementation, not to mention questions to need to ask the broader business.

Your DTD idea has been tried a lot in prior decades. In fact, as a student I was on one of those testing teams. It's a terrible idea, throwing code over a wall like that is a great way to radically increase the latency of communication, and to have a raft of things get missed.

I have no idea why there's such common misconceptions of what TDD is. Maybe folks are being taught some really bad ideas here?

EddySchauHai · 4 years ago
> Also with TDD often it will be centered in quantity of tests and not so much quality.

100%. Metrics of quality are really really hard to define in a way that are both productive and not gamified by engineers.

> What I find is much much better approach is what I call "detached test development" (DTD)

I'm a test engineer and some companies do 'embed' an SDET like the way you mention within a team - it's not quite that clear cut, they can discuss, but it's still one person implementing and another testing.

I'm always happy to see people with thoughts on testing as a core part of good engineering rather than an afterthought/annoyance :)

ivan_gammel · 4 years ago
What you described is quite a common role of a QA automation team, but it does not really replace TDD. A separate team working on tests can do it only by relying on a remote contract (e.g. API, UI, or database schema); they cannot test local contracts like the public interface of a class, because that would require that code to already be written. In TDD you often write the code AND the test at the same time, integrating the test and the code at compile time.
thrwyoilarticle · 4 years ago
>2 separate teams get the requirements; one team writes code, the other write tests.

This feels a bit like when you write a layer of encapsulation to try to make a problem easier only to discover that all of the complexity is now in the interface. Isn't converting the PO's requirements into good, testable requirements the hard technical bit?

no_wizard · 4 years ago
That's kind of TDD's core point. You don't really know upfront, so you write tests to validate what you can define up front, and through that you discover other things that were not accounted for, and the cycle continues until you have a working system that satisfies the requirements. Then all those tests serve as a basic form of documentation & reasonable validation of the software, so when further modifications are desired, you don't break what you already know to be reasonably valid.

Therefore, TDD's secret sauce is in concretely forcing developers to think through requirements, mental models, etc. and quantify them in some way. When you hit a block, you need to ask yourself what's missing, then figure it out, and continue onward, making adjustments along the way.

This is quite malleable to unknown unknowns etc.

I think the problem is most people just aren't chunking down the steps of creating a solution enough. I'd argue that the core way of approaching TDD fights most human behavioral traits. It forces a sort of abstract level of reasoning about something that lets you break things down into reasonable chunks.

pjmlp · 4 years ago
I doubt ZFS authors would have succeeded designing it with TDD.
mcv · 4 years ago
Exactly. I use TDD in situations where it fits. And when it does, it's absolutely great. But there are many situations where it doesn't fit.

TDD is not a silver bullet, it's one tool among many.

majikandy · 4 years ago
I find it as close as I have ever found to a silver bullet.
yoden · 4 years ago
> test coverage gets in the way of the iterative design process. In theory TDD should work as part of that iterative design, but in practice it means a growing collection of broken tests and tests for parts of the program that end up being completely irrelevant.

So much of this is because TDD has become synonymous with unit testing, and specifically solitary unit testing of minimally sized units, even though that was often not the original intent of the ideators of unit testing. These tests are tightly coupled to your unit decomposition. Not the unit implementation (unless they're just bad UTs), but the decomposition of the software into particular units/interfaces. Then the decomposition becomes very hard to change because the tests are exactly coupled to it.

If you take a higher view of unit testing, such as what is suggested by Martin Fowler, a lot of these problems go away. Tests can be medium level and that's fine. You don't waste a bunch of time building mocks for abstractions you ultimately don't need. Decompositions are easier to change. Tests may be more flaky, but you can always improve that later once you've understood your requirements better. Tests are quicker to write, and they're more easily aligned with actual user requirements rather than made up unit boundaries. When those requirements change, it's obvious which tests are now useless. Since tests are decoupled from the lowest level implementation details, it's cheap to evolve those details to optimize implementation details when your performance needs change.

eyelidlessness · 4 years ago
> The trouble with TDD is that quite often we don't really know how our programs are going to work when we start writing them, and often make design choices iteratively as we start to realize how our software should behave.

This is a trouble I often see expressed about static types. And it’s an intuition I shared before embracing both. Thing is, embracing both helped me overcome the trouble in most cases.

- If I have a type interface, there I have the shape of the definition up front. It’s already beginning to help verify the approach that’ll form within that shape.

- Each time I write a failing test, there I have begun to define the expected behavior. Combined with types, this also helps verify that the interface is appropriate, as the article discusses, though not in terms of types. My point is that it’s also verifying the initial definition.

Combined, types and tests are (at least a substantial part of) the definition. Writing them up front is an act of defining the software up front.
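For illustration only (Python with type hints; the function and its shape are hypothetical), a signature plus a test can act as that up-front definition, with the body filled in last:

```python
# A sketch of "types and tests as the definition". The type-annotated
# signature is the first half of the definition; the assertion below is
# the second half. Both can exist before the body does.
from typing import List

def moving_average(values: List[float], window: int) -> List[float]:
    # In the test-first step this body would start as `raise
    # NotImplementedError` and the assertion below would fail (red).
    if window <= 0:
        raise ValueError("window must be positive")
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# The failing-test-first step: expected behavior pinned down up front.
assert moving_average([1.0, 2.0, 3.0, 4.0], 2) == [1.5, 2.5, 3.5]
```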

I’m not saying this works for everyone or for every use case. I find it works well for me in the majority of cases, and that the exception tends to be when integrating with systems I don’t fully understand and which subset of their APIs are appropriate for my solution. Even so writing tests (and even sometimes types for those systems, though this is mostly a thing in gradually typed languages) often helps lead me to that clarity. Again, it helps me define up front.

All of this, for what it’s worth, is why I also find the semantics of BDD helpful: they’re explicit about tests being a spec.

grepLeigh · 4 years ago
> Unfortunately most software is just not well defined up front.

This is true, and I think that's why TDD is a valuable exercise to disambiguate requirements.

You don't need to take an all/nothing approach. Even if you clarify 15-20% of the requirements enough to write tests before code, that's a great place to begin iterating on the murky 80%.

ParetoOptimal · 4 years ago
>Unfortunately most software is just not well defined up front.

Because for years people have practiced defining software iteratively, whether by choice or because they were forced to by deadlines and agile.

That doesn't inherently make one or the other harder, it's just another familiarity problem.

TDD goes nicely with top-down design, using something like Haskell's `undefined` and its `where` clauses to stub out functionality that typechecks.

    myFunction = haveAParty . worldPeace . fixPoverty $ world
      where
        worldPeace = undefined
        haveAParty = undefined
        fixPoverty = undefined
Iterative designs usually suck to maintain and use because they reflect the organizational structure of your company. That'll happen anyway to an extent, but better abstractions to make future you and future co-workers lives easier are totally worth it.

julianlam · 4 years ago
I often come up with test cases (just the cases, not the actual logic) while writing the feature. However I am never in the mood to context switch to write the test, so I'll do the bare minimum. I'll flip over to the test file and write the `it()` boilerplate with the one-line test title and flip back to writing the feature.

By the time I've reached a point where the feature can actually be tested, I end up with a pretty good skeleton of what tests should be written.

There's a hidden benefit to doing this, actually. It frees up your brain from keeping that running tally of "the feature should do X" and "the feature should guard against Y", etc. (the very items that go poof when you get distracted, mind you)

majikandy · 4 years ago
I seem to remember this being mentioned in the original TDD book. To brain dump that next test scenario title you think of so as to get it out of your head and get back to the current scenario you are trying to make pass. So by the same idea as above, to not context switch between the part of the feature you are trying to get to work.
waynesonfire · 4 years ago
jeez, well defined spec? what a weird concept. Instead, we took a complete 180 and all we get are weekly sprints. just start coding, don't spend time understanding your problem. what a terrible concept.
vrotaru · 4 years ago
Even for something which is well defined up-front this can be of dubious value. Converting a positive integer less than 3000 to Roman numerals is a well-defined task. Now if you try to write such a program using TDD, what do you think you will end up with?

Try it. Write a test for 1, and an implementation which passes that test then for 2, and so on.

Below is something written without any TDD (in Java):

    private static String convert(int digit, String one, String half, String ten) {
        switch (digit) {
        case 0: return "";
        case 1: return one;
        case 2: return one + one;
        case 3: return one + one + one;
        case 4: return one + half;
        case 5: return half;
        case 6: return half + one;
        case 7: return half + one + one;
        case 8: return half + one + one + one;
        case 9: return one + ten;
        default:
            throw new IllegalArgumentException("Digit out of range 0-9: " + digit);
        }
    }

    public static String convert(int n) {
        if (n < 0 || n > 3000) {
            throw new IllegalArgumentException("Number out of range 0-3000: " + n);
        }

        return convert(n / 1000, "M", "", "") +
               convert((n / 100) % 10, "C", "D", "M") +
               convert((n / 10) % 10, "X", "L", "C") +
               convert(n % 10, "I", "V", "X");
    }
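For contrast, here is a sketch (in Python, for brevity) of where the "test for 1, then 2, then ..." increments tend to end up. The early red-green passes invite one-case-at-a-time conditionals; it takes a deliberate refactor, not the tests themselves, to arrive at a table-driven solution like the Java above.

```python
# A plausible end state after several red-green-refactor cycles on the
# Roman numeral kata. The tests accumulate one per iteration; the
# table-driven body only appears once you refactor away the naive
# special cases that each individual test encouraged.

def to_roman(n: int) -> str:
    numerals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in numerals:
        # Greedily subtract the largest remaining numeral value.
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

# The accumulated tests, roughly one per TDD iteration:
assert to_roman(1) == "I"
assert to_roman(2) == "II"
assert to_roman(4) == "IV"
assert to_roman(9) == "IX"
assert to_roman(14) == "XIV"
assert to_roman(1990) == "MCMXC"
assert to_roman(3000) == "MMM"
```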

AnimalMuppet · 4 years ago
> In theory TDD should work as part of that iterative design, but in practice it means a growing collection of broken tests and tests for parts of the program that end up being completely irrelevant.

If you have "a growing collection of broken tests", that's not TDD. That's "they told us we have to have tests, so we wrote some, but we don't actually want them enough to maintain them, so instead we ignore them".

Tests help massively with iterating a design on a partly-implemented code base. I start with the existing tests running. I iterate by changing some parts. Did that break anything else? How do I know? Well, I run the tests. Oh, those four tests broke. That one is no longer relevant; I delete it. That other one is testing behavior that changed; I fix it for the new reality. Those other two... why are they breaking? Those are showing me unintended consequences of my change. I think very carefully about what they're showing me, and decide if I want the code to do that. If yes, I fix the test; if not, I fix the code. At the end, I've got working tests again, and I've got a solid basis for believing that the code does what I think it does.

mirzap · 4 years ago
> The trouble with TDD is that quite often we don't really know how our programs are going to work

> The obvious exception to this, where I still use TDD, is when implementing a well defined spec.

From my understanding (and experience), TDD is quite the opposite. It's most useful when you don't have the spec, don't have clue how software will work in the end. TDD creates the spec, iteratively.

pjmlp · 4 years ago
Unless we are talking about any kind of GUI.
Buttons840 · 4 years ago
When I've been serious about testing I'll usually:

    1. Hack in what I want in some exploratory way
    2. Write good tests
    3. Delete my hacks from step 1, and ensure all my new tests now fail
    4. Re-implement what I hacked together in step 1
    5. Ensure all tests pass
This allows you to explore while still retaining the benefits of TDD.

gleenn · 4 years ago
There's a name for it, it's called a "spike". You write a bunch of exploratory stuff, get the idea right, throw it all away (without even writing tests) and then come back doing TDD.
karmelapple · 4 years ago
For the software you're thinking about, do you have specific use cases or users in mind? Or are you building, say, an app for the first time, perhaps for a very early stage startup that is nowhere close to market fit yet?

We typically write acceptance tests, and they have been helpful either early on or later in our product development lifecycle.

Even if software isn't defined upfront, the end goal is likely defined upfront, isn't it? "User X should be able to get data about a car," or "User Y should be able to add a star ratings to this review," etc.

If you're building a product where you're regularly throwing out large parts of the UI / functionality, though, I suppose it could be bad. But as a small startup, we have almost never been in that situation over the many years we've been in business.

jonstewart · 4 years ago
It's funny, because I feel like TDD -- not just unit-testing, but TDD -- is most helpful when things aren't well-defined. I think back to "what's the simplest test that could fail?" and it helps me focus on getting some small piece done. From there, it snowballs and the code emerges. Obviously it's not always perfect, and something learned along the way spurs refactoring/redesign. That always strikes me as a natural process.

In many ways I guess I lean maximalist in my practices, and find it helpful, but I'd readily concede that the maximalist advocates are annoying and off-putting. I once had the opportunity to program with Ward Cunningham for a weekend, and it was a completely easygoing and pragmatic experience.

bitwize · 4 years ago
And this is why you use spike solutions, to explore the problem space without the constraints of TDD.

But spikes are written to be thrown away. You never put them into production. Production code is always written against some preexisting test, otherwise it is by definition broken.

gregmac · 4 years ago
> it's impossible to write adequate test coverage up front

I'm not sure what you mean by this. Why are the tests you're writing not "adequate" for the code you're testing?

If I read into this that you're using code coverage as a metric -- and perhaps even striving for as close to 100% as possible -- I'd argue that's not useful. Code coverage, as a goal, is perhaps even harmful. You can have 100% code coverage and still miss important scenarios -- this means the software can still be wrong, despite the huge effort put into getting 100% coverage and having all tests both correct and passing.

jwarden · 4 years ago
I wish I could remember who wrote the essay with the idea of tests as investment in protecting functionality. When after a bit of experimentation or iteration you think you have figured out more or less one part of how your software should behave, then you want to protect that result. It is worth investing in writing and maintaining a test to make sure you don't accidentally break this functionality.

Functionality based on a set of initial specs and a hazy understanding of the actual problem you are trying to solve might, on the other hand, not be worth investing in protecting.

majikandy · 4 years ago
It sounds a little like you are trying to write all the tests to the spec up front? With TDD you are still allowed to change design choices as you go and as you realise how you want it to behave. That’s why the tests are written one by one. In my experience, TDD carries the most value when you really don’t know where you are going: you write the first test and you start rolling, and somehow you end up at your destination. People think you were good at writing code, but in a way the code was writing itself as it evolved its way to completeness.
Double_a_92 · 4 years ago
But often you don't know at all how to best solve a problem, since the solution will probably need to touch many existing code units somehow.

It might work if you are starting on some new, relatively self-contained feature...

quickthrower2 · 4 years ago
You can do TDD if you do something managers hate!

And that is, write code, chuck it away, start again.

Prototype your feature without TDD. Then chuck it away and build it again with TDD.

My guess is that the gains in code quality and reduced technical debt pay for more than the time lost.

Very few companies work like this I imagine: None that I have worked for.

Since typing out the code is a small part of software development, it is probably a great use of time, and could catch more bugs and design quirks early on, when they cost $200/h to fix instead of $2000/h.

SomeCallMeTim · 4 years ago
That's one issue with TDD. I agree 100% in that respect.

Another partly orthogonal issue is that design is important for some problems, and you don't usually reach a good design by chipping away at a problem in tiny pieces.

TDD fanatics insist that it works for everything. Do I believe them that it improved the quality of their code? Absolutely; I've seen tons of crap code that would have benefited from any improvement to the design, and forcing it to be testable is one way to coerce better design decisions.

But it really only forces the first-order design at the lowest level to be decent. It doesn't help at all, or at least not much, with the data architecture or the overall data flow through the application.

And sometimes the only sane way to achieve a solid result is to sit down and design a clean architecture for the problem you're trying to solve.

I'm thinking of one solution I came up with for a problem that really wasn't amenable to the "write one test and get a positive result" approach of TDD. I built up a full tree data structure that was linked horizontally to "past" trees in the same hierarchy (each node was linked to its historical equivalent node). This data structure was really, really needed to handle the complex data constraints the client was requesting. As yes, we pushed the client to try to simplify those constraints, but they insisted.

The absolute spaghetti mess that would have resulted from TDD wouldn't have been possible to refactor into what I came up with. There's just no evolutionary path between points A and B. And after it was implemented and it functioned correctly--they changed the constraints. About a hundred times. I'm not even exaggerating.

Each new constraint required about 15 minutes of tweaking to the structure I'd created. And yes, I piled on tests to ensure it was working correctly--but the tests were all after the fact, and they weren't micro-unit tests but more of a broad system test that covered far more functionality than you'd normally put in a unit test. Some of the tests even needed to be serialized so that earlier tests could set up complex data and states for the later tests to exercise, which I understand is also a huge No No in TDD, but short of creating 10x as much testing code, much of it being completely redundant, I didn't really have a choice.

So your point about the design changing as you go is important, but sometimes even the initial design is complex enough that you don't want to just sit down and start coding without thinking about how the whole design should work. And no methodology will magically grant good design sense; that's just something that needs to be learned. There Is No Silver Bullet, after all.

ivan_gammel · 4 years ago
> Another partly orthogonal issue is that design is important for some problems, and you don't usually reach a good design by chipping away at a problem in tiny pieces.

True, but… you can still design the architecture, outlining the solution for the entire problem, and then apply TDD. In this case your architectural solution will be an input for low level design created in TDD.

agumonkey · 4 years ago
I remember early UML courses (based on pre-Java/OO languages). They were all about modules and coupling dependencies: trying to keep coupling low, and the modules not too rigidly defined. It seems the spirit behind this (at least the only one that makes sense to me) is that you don't know yet, so you just want to avoid coupling hard early, leaving room for low-cost adaptation while you discover how things will be.
ThalesX · 4 years ago
Whenever I start a greenfield frontend for someone they think I’m horrible in the first iteration. I tend to use style attributes and just shove CSS in there, and once I have enough things of a certain type I extract a class. They all love the result but distrust the first step.
marcosdumay · 4 years ago
At this point I doubt the existence of well defined specs.

Regulations are always ambiguous, standards are never followed, and widely implemented standards are never implemented the way the document tells.

You will probably still gain productivity by following TDD for those, but your process must not penalize changes in spec too much, because even when it's written in law, what you read is not exactly what you will create.

mrjin · 4 years ago
TDD is not really about getting designs right, but about preventing known-good logic from being broken unexpectedly and repeatedly.
archibaldJ · 4 years ago
Thus spake the Master Programmer: "When a program is being tested, it is too late to make design changes."

- The Tao of Programming (1987)

jiggawatts · 4 years ago
This is precisely my experience also. I loved TDD when developing a parser for XLSX files to be used in a PowerShell pipeline.

I created dozens of “edge case” sample spreadsheets with horrible things in them like Bad Strings in every property and field. Think control characters in the tab names, RTL Unicode in the file description, etc…

I found several bugs… in Excel.

randomdata · 4 years ago
TDD isn't concerned with how your program works. In fact, implementation details leaking into your tests can become quite problematic, including introducing the problems you speak of. TDD is concerned with describing what your program should accomplish. If you don't know what you want to accomplish, what are you writing code for?
giantrobot · 4 years ago
The issue is that what you want to accomplish is often tightly coupled with how it is accomplished. In order to test the "what", the test needs to contain the context of the "how".

As a made-up example, the "what" of the program is to take in a bunch of transactions and emit daily summaries. That's a straightforward "what". It however leaves tons of questions unanswered. Where does the data come from and in what format? Is it ASCII or Unicode? Do we control the source or is it from a third party? How do we want to emit the summaries? Printed to a text console? Saved to an Excel spreadsheet? What version of Excel? Serialized to XML or JSON? Do we have a spec for that serialized form? What precision do we need to calculate vs what we emit?

So the real "what" is: take in transaction data encoded as UTF-8 from a third party provider which lives in log files on the file system without inline metadata then translate the weird date format with only minute precision and lacking an explicit time zone and summarize daily stats to four decimal places but round to two decimal places for reporting and emit the summaries as JSON with dates as ISO ordinal dates and values at two decimal places saved to an FTP server we don't control.

While waiting for all that necessarily but often elided detail you can either start writing some code with unit test or wait and do no work until you get a fully fleshed out spec that can serve as the basis for writing tests. Most organizations want to start work even while the final specs of the work are being worked on.

wvenable · 4 years ago
> If you don't know what you want to accomplish, what are you writing code for?

Oftentimes you write code to find out what you want to accomplish. It sounds backwards, perhaps it is backwards, but it's also very human. Without something to show the user, they often have no idea what they want. In fact, people are far better at telling you what's wrong with what's presented to them than at enumerating everything they want ahead of time.

TDD is great but also completely useless for sussing requirements out of users.

gjadi · 4 years ago
Isn't the issue that we are reluctant to remove stuff? In the same vein as others said, we should throw away one or two versions of a program before shipping it.

Maybe we need to learn how to delete stuff that doesn't make sense.

Get rid of broken test. Get rid of incorrect documentation.

Don't be afraid to delete stuff to improve the overall program.

eitally · 4 years ago
I still remember a project (I was the eng director and one of my team leads did this) where my team lead for a new dev project was given a group of near-shore SWEs + offshore SQA who were new to both the language & RDBMS of choice, and also didn't have any business domain experience. He decided that was exactly the time to implement TDD, and he took it upon himself to write 100% test coverage based on the approved specs, and literally just instructed the team to write code to pass the tests. They used daily stand-ups to answer questions, and weekly reviews to assess themes & progress. It was slow going, but it was a luxurious experience for the developers, many of whom were using pair programming at the time and now found themselves on a project where they had a committed & dedicated senior staffer to actively review their work and coach them through the project (and new tools learnings). I had never allowed a project to be run like that before, but it was one where we had a fairly flexible timeline as long as periodic deliverables were achieved, so I used it as a kind of science project to see how something that extreme would fare.

The result was that 1) the devs were exceptionally happy, 2) the TL was mostly happy, except with some of the extra forced work he created for himself as the bottleneck, 3) the project took longer than expected, and 4) the code was SOOOOO readable but also very inefficient. We realized during the project that forcing unit tests for literally everything was also forcing a breaking up of methods & functions into much smaller discrete pieces than would have been optimal from both performance & extensibility perspectives.

It wasn't the last TDD project we ran, but we were far more flexible after that.

I had one other "science project" while managing that team, too. It was one where we decided to create an architect role (it was the hotness at that time), and let them design everything from the beginning, after which the dev team would run with it using their typical agile/sprint methodology. We ended up with the most spaghetti code of abstraction upon abstraction, factories for all sorts of things, and a codebase that became almost unsupportable from the time it was launched, necessitating v2.0 be a near complete rewrite of the business logic and a lot of the data interfaces.

The lessons I learned from those projects was that it's important to have experienced folks on every dev team, and that creating a general standard that allows for flexibility in specific architectural/technical decisions will result in higher quality software, faster, than if one is too prescriptive (either in process or in architecture/design patterns). I also learned that there's no such thing as too much SQA, but that's for a different story.

hgomersall · 4 years ago
Since I've moved to full-time Rust I'm finding it much harder to precede the code with tests (ignoring for a moment the maximalist/minimalist discussion). I think it's because the abstractions can be so powerful that the development process is iterating over high-level abstractions. The bit I worry about testing is the business logic, but in my experience that's not something you can test with a trivial unit test, and that test tends to iterate with the design to some extent. Essentially I end up with a series of behavioural tests and an implementation that, as far as possible, can't take inputs that can be mishandled (through e.g. the newtype pattern, static constraints, etc.).

I'm not quite sure what is right or wrong about my approach, but I do find the code tends to work and work reliably once it compiles and the tests pass.

lupire · 4 years ago
It's Test Driven Development, not Test Driven Research.

Very few critics notice this.

anonymoushn · 4 years ago
Maybe you disagree with GP about whether one should always do all their research without actually learning about the problem by running code?
wodenokoto · 4 years ago
This rings very true for me.

I write tdd when doing advent of code. And it’s not that I set out to do it or to practice it or anything. It just comes very natural to small, well defined problems.

smrtinsert · 4 years ago
I don't see how you can develop anything without at least technical clarity on what the components of your system should do.
AtlasBarfed · 4 years ago
Yeah, TDD has way too much "blame the dev" for the usual cavalcade of organizational software process failures.
fsdghrth3 · 4 years ago
> This ultimately means, what most programmers intuitively know, that it's impossible to write adequate test coverage up front

Nobody out there is writing all their tests up front.

TDD is an iterative process, RED GREEN REFACTOR.

- You write one test.

- Write JUST enough code to make it pass.

- Refactor while maintaining green.

- Write a new test.

- Repeat.
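One such cycle might look like this (sketched in Python; the function and test names are hypothetical):

```python
# One red-green-refactor cycle, sketched with hypothetical names.

# RED: write one failing test first (slugify doesn't exist yet).
def test_slug_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# GREEN: write just enough code to make it pass.
def slugify(title):
    return title.lower().replace(" ", "-")

test_slug_replaces_spaces()

# REFACTOR: clean up while staying green, then write the next failing
# test (e.g. punctuation handling) and repeat.
```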

I don't want this to come off the wrong way but what you're describing shows you are severely misinformed about what TDD actually is or you're just making assumptions about something based on its name and nothing else.

Supermancho · 4 years ago
In practice that means writing N (or 1) tests N times over, depending on how many times I have to rewrite the "unit" for some soft idea of completeness. After the red/green single case, it necessarily has to expand to N cases as the unit is rewritten to handle the additional cases imagined (boundary conditions, incorrect inputs, exceptions, etc.). Then I see that I could have optimized the method, and rewrite it again, leveraging the existing red/green tests.

Everyone understands the idea; it's just a massive time sink for no more benefit than a test-after methodology provides.

gjulianm · 4 years ago
> - You write one test.

> - Write JUST enough code to make it pass.

Those two steps aren't really trivial. Even just writing the single test might require making a lot of design decisions that you can't really make up-front without the code.

yibg · 4 years ago
The first test is never the problem. The problem as OP pointed out is after iterating a few times you realize you went down the wrong track or the requirements have changed / been clarified. Now a lot of the tests you iterated through aren't relevant anymore.
happytoexplain · 4 years ago
In my admittedly not-vast experience, a pattern going bad because the implementer doesn't understand it is actually the implementer's fault only a minority of the time, and the fault of the pattern the majority of the time. This is because a pattern making sense to an implementer requires work from both sides, and which side is slacking can vary. Sometimes the people who get it and like it purposefully overlook this pragmatic issue, because "you're doing it wrong" seems like a silver bullet against critiques.
_gabe_ · 4 years ago
Reiterating the same argument in screaming case doesn't bolster your argument. It feels like the internet equivalent of a real life debate where a debater thinks saying the same thing LOUDER makes a better argument.

> - You write one test

Easier said than done. Say your task is to create a low level audio mixer which is something you've never done before. Where do you even begin? That's the hard part.

Some other commenters here have pointed out that exploratory code is different from TDD code, which is a much better argument than the one you made here, imo.

> I don't want this to come off the wrong way but what you're describing shows you are severely misinformed about what TDD actually is or you're just making assumptions about something based on its name and nothing else.

Instead of questioning the OP's qualifications, perhaps you should hold a slightly less dogmatic opinion. Perhaps OP is familiar with this style of development, and they've run into problem firsthand when they've tried to write tests for an unknown problem domain.

DoubleGlazing · 4 years ago
In my experience the write a new test bit is where it all falls down. It's too easy to skimp out on that when there are deadlines to hit or you are short staffed.

I've seen loads of examples where the tests haven't been updated in years to take account of new functionality. When that happens you aren't really doing TDD anymore.

unrealhoang · 4 years ago
How to write that one test without the iterative design process? That's something always missing from the TDD guides.

Deleted Comment

Deleted Comment

sedachv · 4 years ago
TDD use would be a lot different if people actually bothered to read the entirety of Kent Beck's _Test Driven Development: By Example_. It's a lot to ask, because it is such a terribly written book, but there is one particular sentence where Beck gives it away:

> This has happened to me several times while writing this book. I would get the code a bit twisted. “But I have to finish the book. The children are starving, and the bill collectors are pounding on the door.”

Instead of realizing that Kent Beck stretched out an article-sized idea into an entire book, because he makes his money writing vague books on vague "methodology" that are really advertising brochures for his corporate training seminars, people actually took the thing seriously and legitimately believed that you (yes, you) should write all code that way.

So a technique that is sometimes useful for refactoring and sometimes useful for writing new code got cargo-culted into a no-exceptions-this-is-how-you-must-do-all-your-work Law by people that don't really understand what they are doing anymore or why. Don't let the TDD zealots ruin TDD.

evouga · 4 years ago
This seems to be the case with a lot of "methodologies" like TDD, Agile, XP, etc. as well as "XXX considered harmful"-style proscriptions.

A simple idea ("hey, I was facing a tricky problem and this new way of approaching it worked for me. Maybe it will help you too?") mutates into a blanket law ("this is the only way to solve all the problems") and then pointy-haired folks notice the trend and enshrine it into corporate policy.

But Fred Brooks was right: there are no silver bullets. Do what works best for you/your team.

bitwize · 4 years ago
The 2000s design-patterns-mania is another case. Design patterns should be thought of less as things you have to memorize and apply in a textbook fashion, and more like tropes: things you'll see over and over in code, and once you know their names you can start talking about them and their interactions in meaningful ways. Just as writers like tropes because they make the job of writing easier, overuse of them is a sign of laziness; and so it is with design patterns.
cpill · 4 years ago
yeah, I find software engineers like to find absolute answers to fuzzy problems. I guess it's the nature of the job
joshka · 4 years ago
The fun thing about this book (which I haven't read in its entirety) is that it really shuts down a lot of the maximalist ideas in a few places (here's one particular section).

  There are really two questions lurking here: 
  How much ground should each test cover?
  How many intermediate stages should you go through as you refactor?
  You could write the tests so they each encouraged the addition of a single line of logic and a handful of refactorings. You could write the tests so they each encouraged the addition of hundreds of lines of logic and hours of refactoring. Which should you do?
  Part of the answer is that you should be able to do either. The tendency of Test-Driven Developers over time is clear, though - smaller steps. However, folks are experimenting with driving development from application-level tests, either alone or in conjunction with the programmer-level tests we've been writing.

viceroyalbean · 4 years ago
Indeed. I read the book in hopes of getting a good intro to TDD after only picking it up by osmosis (which, as proven by the discussions here, is not a good way to learn TDD) and it definitely goes against the maximalist interpretation as described in TFA. While there are examples showing the minimal code-approach he is very explicit about the fact that you don't have to write your code that way.

One thing I liked specifically was his emphasis on the idea that you can use TDD to adjust the size of your steps to match the complexity of the code. Very complex? Small steps with many tests, maybe using the minimal code-approach to get things going. Simple/trivial? A single test and the solution immediately with no awkward step in between.

loevborg · 4 years ago
You have got to be kidding. Beck's book - both TDD: By Example and Extreme Programming - are very well written and have about the highest signal/noise ratio of any programming books.
sedachv · 4 years ago
_Test Driven Development: By Example_ certainly had the highest ratio of dumb unnecessary jokes to contrived unconvincing examples of any programming book I have read. My copy of TAOCP volume 3 doesn't even begin to compare. Clearly Knuth was doing something wrong.
yomkippur · 4 years ago
> > This has happened to me several times while writing this book. I would get the code a bit twisted. “But I have to finish the book. The children are starving, and the bill collectors are pounding on the door.”

I wonder how many methodologies and books are written with the same banal driver. It is somebody's livelihood, and nobody pays writers to stop in the middle because they realize the idea is flawed.

I once found a book on triangular currency arbitrage or something like that at my library. It was 4000 pages long and heavy. The book would ramble on in language that made it difficult to follow and was filled to the brim with mathematical notation, which really offered no value because the book was written in the 70s and no longer offered any executable knowledge. But finance schools swear by it, and speaking out would trigger a lot of people.

TDD is a cult. Science is also a cult in that manner, it rejects the existence of what it cannot measure and it gangs up on those that go against it.

Deleted Comment

tippytippytango · 4 years ago
The main reason TDD hasn't caught on is there's no evidence it makes a big difference in the grand scheme of things. You can't operationalize it at scale either. There is no metric or objective test that you can run code through that will give you a number in [0, 1] that tells you the TDDness of the code. So if you decide to use TDD in your business, you can't tell the degree of compliance with the initiative or correlation with any business metrics you care about. The customers can't tell if the product was developed with TDD.

Short of looking over every developer's shoulder, how do you actually know the extent to which TDD is being practiced as prescribed? (red, green, refactor) Code review? How do you validate your code reviewer's ability to identify TDD code? What if someone submits working tested code, but you smell it's not TDD — what then? Tell them to pretend they didn't write it and start over with the correct process? At what part of the development process do you start to practice it? Do you make the R&D people do it? Do you make the prototypers do it? What if the prototype got shipped into production?

Because of all this, even if the programmers really do write good TDD code, the business people still can't trust you, they still have to QA test all your stuff. Because they can't measure TDD, they have no idea when you are doing it. Maybe you did TDD for the last release; but, are starting to slip? Who knows, just QA the product anyways.

I like his characterization of TDD as a technique. That's exactly what it is, a tool you use when the situation calls for it. It's a fantastic technique when you need it.

mehagar · 4 years ago
You make a good point about not being able to enforce that TDD is actually followed. The best we could do is check that unit tests exist at all.

In theory, if TDD really reduces the number of bugs and speeds up development, you would see it reflected in those higher-level metrics that impact the customer.

agloeregrets · 4 years ago
> In theory, if TDD really reduces the number of bugs and speeds up development you would see if reflected in those higher level metrics that impact the customer.

The issue is that many TDD diehards believe that bugs and delays are made by coders who did not properly qualify their code before they wrote it.

In reality, bugs and delays are a product of an organization. Bad coders can write bad tests that pass bad code just fine. Overly short deadlines will cause poor tests. Furthermore, many coders report that they have trouble with the task-switching nature of TDD. To write a complex function, I will probably break it out into a bunch of smaller pure functions. In TDD that may require you to either: 1. Write a larger function that passes the test and break it down. 2. Write a test that validates that the larger function calls other functions, and then write tests that define each smaller function.

The problem with these flows is that 1 causes rework, and 2 ends up being like reading a book out of order: you may get to function 3 and realize that function 2 needed additional data, and now you have to rewrite your test for 2. Once again, rework. I'm sure there are some gains in some spaces, but overall it seems that the rework burns those gains off.
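Flow 1 from the comment above can be sketched like this (Python; the function and helpers are made up for illustration). The single test pins only the outer behavior, so extracting the smaller pure functions afterwards is a refactoring that needs no test changes — which is exactly where the claimed rework of flow 2 disappears:

```python
def normalize(record):
    """Outer behavior pinned by the test; the helpers are later refactorings."""
    return _strip_values(_lowercase_keys(record))

# Extracted from the original larger function without touching the test:
def _lowercase_keys(record):
    return {k.lower(): v for k, v in record.items()}

def _strip_values(record):
    return {k: v.strip() if isinstance(v, str) else v
            for k, v in record.items()}

# The one test written up front, against the big monolithic version:
assert normalize({"Name": "  Ada  ", "Age": 36}) == {"name": "Ada", "age": 36}
```

Under flow 2 each helper would get its own up-front test, and reshuffling data between `_lowercase_keys` and `_strip_values` would mean rewriting those tests too.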

tippytippytango · 4 years ago
Exactly, if it made a big difference to profitability then it would be evident in the market place. TDD shops would out compete the ones that don’t use it. This doesn’t seem to happen in the market. What that means, if TDD is a benefit, it is such a small benefit that other factors in the business eclipse its impact.
samatman · 4 years ago
One can enforce the use of TDD through pair programming with rotation, as Pivotal does.

I don't know that Pivotal (in particular) does pair programming so that TDD is followed; I do know that they (did) follow TDD and do everything via pair programming. I'm agnostic as to whether it's a good idea generally — it's not how I want to live, but I've had a few associates who really liked it.

klysm · 4 years ago
Wow that sounds absolutely awful. A lot of the work I do is thinking long and hard about what I want my API to look like. It’s an iterative process and I want to be able to throw shit out a lot.

Deleted Comment

cpill · 4 years ago
isn't that what txt coverage is about?
tippytippytango · 4 years ago
Did you mean test coverage? Test coverage tells you the code was tested, but it doesn’t tell you if the programmer used TDD to write the tests.
reggieband · 4 years ago
I could write an entire blog post on my opinions on this topic. I continue to be extremely skeptical of TDD. It is sort of infamous but there is the incident where a TDD proponent tries and fails to develop a sudoku solver and keeps failing at it [1].

This kind of situation matches my experience. It was cemented when I worked with a guy who was a zealot about TDD and the whole Clean Code cabal around Uncle Bob. He was also one of the worst programmers I have worked with.

I don't mean to say that whole mindset is necessarily bad. I just found that becoming obsessed with it isn't sufficient. I've worked with guys who have never written a single test yet ship code that does the job, meets performance specs, and runs in production environments with no issues. And I've worked with guys who get on their high horse about TDD but can't ship code on time, or it is too slow, and it has constant issues in production.

No amount of rationalizing about the theoretical benefits can match my experience. I do not believe you can take a bad programmer and make them good by forcing them to adhere to TDD.

1. https://news.ycombinator.com/item?id=3033446

commandlinefan · 4 years ago
> tries and fails to develop a sudoku solver and keeps failing at it

But that's because he deliberately does it in a stupid way to make TDD look bad, just like the linked article does with his "quicksort test". But that's beside the point - of course a stupid person would write a stupid test, but that same stupid person would write a stupid implementation, too... but at least there would be a test for it.

evouga · 4 years ago
Huh? Ron Jeffries is a champion of TDD (see for instance https://ronjeffries.com/articles/019-01ff/tdd-one-word/). He most certainly wasn't deliberately implementing Sudoku in a stupid way to make TDD look bad!
laserlight · 4 years ago
Top-most comment to the link you provided pretty much explains the situation. TDD is a software development method, not a generic problem solving method. If one doesn’t know how a Sudoku solver works, applying TDD or any other software development method won’t help.
sidlls · 4 years ago
One of the theses of TDD is that the tests guide the design and implementation of an under specified (e.g. unknown) problem, given the requirements regarding the outcomes and a complete enough set of test cases. “Theoretically” one should be able to develop a correct solver without knowing how it works by iterative improvements using TDD. It might not be of good quality, but it should work.

Note: I am quite skeptical of TDD in general.

mikkergp · 4 years ago
>I've worked with guys who have never written a single test yet ship code that does the job, meets performance specs, and runs in production environments with no issues.

I'm curious to unpack this a bit. What other tools do people use besides programmatic testing? Programmatic testing seems to be the most efficient, especially for a programmer. I'm also maybe a bit stuck on the binary nature of your statement. You know developers who've never let a bug or performance issue enter production (with or without testing)?

reggieband · 4 years ago
Originally when I started out in the gaming industry in the early 2000s. There were close to zero code tests written by developers at that time at the studios I worked for. However, there were large departments of QA, probably in the ratio of 3 testers per developer. There was also an experimental Test Engineer group at one of the companies that did automated testing, but it was closer to automating QA (e.g. test rigs to simulate user input for fuzzing).

The most careful programmers I worked with were obsessive about running their code step by step. One guy I recall put a breakpoint after every single curly brace (C++ code) and ensured he tested every single path in his debugger line by line for a range of expected inputs. At each step he examined the relevant contents of memory and often the generated assembly. It is a slow and methodical approach that I could never keep the patience for. When I asked him about automating this (unit testing I suppose) he told me that understanding the code by manually inspecting it was the benefit to him. Rather than assuming what the code would (or should) do, he manually verified all of his assumptions.

One apocryphal story was from the PS1 days, before technical documentation for the device was available. Legend had it that an intrepid young man brought in an oscilloscope to debug and fix an issue.

I did not say that I know any developers who've never let a bug or performance issue enter production. I'm contrasting two extremes among the developers I have worked with for effect. Well written programs and well unit tested programs are orthogonal concepts. You can have one, the other, both or neither. Some people, often in my experience TDD zealots, confuse well unit tested programs with well written programs. If I could have both, I would, but if I could only have one then I'll take the well-written one.

Also, since it probably isn't clear, I am not against unit testing. I am a huge proponent for them, advocating for their introduction alongside code coverage metrics and appropriate PR checks to ensure compliance. I also strongly push for integration testing and load testing when appropriate. But I do not recommend strict TDD, the kind where you do not write a line of code until you first write a failing test. I do not recommend use of this process to drive technical design decisions.

Chris_Newton · 4 years ago
You know developers who've never let a bug or performance issue enter production(with or without testing)?

One of the first jobs I ever had was working in the engineering department of a mobile radio company. They made the kind of equipment you’d install in delivery trucks and taxis, so fleet drivers could stay in touch with their base in the days before modern mobile phone technology existed.

Before being deployed on the production network, every new software release for each level in the hierarchy of Big Equipment was tested in a lab environment with its own very expensive installation of Big Equipment exactly like the stations deployed across the country. Members of the engineering team would make literally every type of call possible using literally every combination of sending and receiving radio authorised for use on the network and if necessary manually examine all kinds of diagnostics and logs at each stage in the hardware chain to verify that the call was proceeding as expected.

It took months to approve a single software release. If any critical faults were found during testing, game over, and round we go again after those faults were fixed.

Failures in that software were, as you can imagine, rather rare. Nothing endears you to a whole engineering team like telling them they need to repeat the last three weeks of tedious manual testing because you screwed up and let a bug through. Nothing endears you to customers like deploying a software update to their local base station that renders every radio within an N mile radius useless. And nothing endears you to an operations team like paging many of them at 2am to come into the office, collect the new software, and go drive halfway across the country in a 1990s era 4x4 in the middle of the night to install that software by hand on every base station in a county.

Automated software testing of the kind we often use today was unheard of in those days, but even if it had been widely used, it still wouldn’t have been an acceptable substitute for the comprehensive manual testing prior to going into production. As for how the developers managed to have so few bugs that even reached the comprehensive testing phase, the answer I was given at the time was very simple: the code was extremely systematic in design, extremely heavily instrumented, and subject to frequent peer reviews and walkthroughs/simulations throughout development so that any deviations were caught quickly. Development was of course much slower than it would be with today’s methods, but it was so much more reliable in my experience that the two alternatives are barely on the same scale.

wglb · 4 years ago
I think this whole failed puzzle indicates that there are some problems that cannot be solved incrementally.

Peter Norvig's solution has one central precept that is not something that you would arrive at by an incremental approach.

But I wonder if this incrementalism is essential for TDD.

danpalmer · 4 years ago
Like almost every spectrum of opinions, the strongest opinions are typically the least practical, and useful only in a theoretical sense and for evolving the conversation in new directions.

I think TDD has a lot to offer, but don't go in for the purist approach. I like Free Software but don't agree with Stallman. It's the same thing.

The author takes a well reasoned, mature, productive, engineering focused approach, like the majority of people should be doing. We shouldn't be applying the pure views directly, we should be informed by them and figure out what we can learn for our own work.

discreteevent · 4 years ago
This was the funny thing about extreme programming. I remember reading the book when it came out. In it Kent Beck more or less said that he came up with the idea because waterfall was so entrenched that he thought the only way to move the dial back to something more incremental was to go to the other extreme end.

This took off like wildfire probably for the same reason that we see extreme social movements/politics take off. People love purity because it's so clean and tidy. Nice easy answers. If I write a test for everything something good will emerge. No need for judgement and hand wringing.

But the thing is that I think Kent Beck got caught up in this himself and forgot the original intention. I could be wrong but it seems like that.

ad404b8a372f2b9 · 4 years ago
Increasingly I've been wondering whether these agile approaches might be a detriment to most open source projects.

There is a massive pool of talented and motivated programmers that could contribute to open source projects, much more massive than any company's engineering dept, yet most projects follow a power law where a few contributors write all the code.

I think eschewing processes and documentation in favour of pure programming centered development, where tests & code serve as documentation and design tools, means the barrier to entry is much higher, and onboarding new members is bottlenecked by their ability to talk with the few main contributors.

The most successful open source projects have a clear established process for contributing and a lot of documentation. But the majority don't have anything like that, and that's only exacerbated by git hosting platforms that put all their emphasis on code over process. I wonder whether setting up new tools around git allowing for all projects to follow the waterfall or a V-cycle might improve the contribution inequality.

totetsu · 4 years ago
But we need to use FDD to use the full spectrum of options.
Joker_vD · 4 years ago
The fact that some people really argue that TDD produces better designs... sigh. Here, look at this [0] implementation of Dijkstra's algorithm, written by Uncle Bob himself. If you think that is well-designed (have you ever seen weighted graphs represented like this?) then, well, I guess nothing will ever sway your opinion on TDD. And mind you, this is a task that does have what a top comment in this very thread calls a "well defined spec".

[0] https://blog.cleancoder.com/uncle-bob/2016/10/26/DijkstrasAl...

codeflo · 4 years ago
What the actual fuck… I only got two pages down and already found several red flags that I would never accept in any code review. Not the least of which is that when querying an edgeless graph for the shortest path from node A to node Z, “the empty path of length 0” is the exact opposite of a correct answer.

So thanks for the link, I guess. I’ll keep this as ammunition for the next time someone quotes Uncle Bob.

sushisource · 4 years ago
Damn, indeed. The Uncle Bob people (or, really, any "this book/blog post/whatever says to do technique x" people) are my absolute least favorite. This is a good riposte. Or, alternatively, if they don't understand why it's bad then you know they're a shit coder.
jonstewart · 4 years ago
In my personal experience, TDD helps me produce better designs. But, thinking also helps me produce better designs, too. There's a lot of documentation that Creepy Uncle Bob isn't the most thoughtful person, and I think this blog post says much more about him than about TDD.

The code is definitely a horror show.

rmetzler · 4 years ago
Can you link to an implementation you would consider great?

I would just like to compare them. I too find Uncle Bob's "Clean Code" book very much overrated.

My understanding of the “design” aspect of TDD is, that you start from client code and create the code that conforms to your tests. Too often I worked in a team with other developers and I wanted to use what they wrote, and they somehow coded what was part of the spec, but it was unusable from my code. Only because I was able to change their code (most often the public API) I was able to use it.

whimsicalism · 4 years ago
It stores it as a collection of edges? Why not use adjacency list representation?

You iterate through all of the edges every time to find a node's neighbors?

idk, this code just looks terrible to me.
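For contrast, here is what the adjacency-list version looks like — a conventional sketch, not a reconstruction of the linked code, assuming the graph is a dict mapping each node to its `(neighbor, weight)` pairs:

```python
import heapq

def dijkstra(graph, source):
    """graph: node -> list of (neighbor, weight). Returns shortest
    distances from source; neighbors come straight from the dict,
    so there is no scan over the full edge collection per node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
assert dijkstra(g, "A") == {"A": 0, "B": 1, "C": 3}
```

Note that an unreachable node simply has no entry in the result, rather than reporting an empty path of length 0.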

JonChesterfield · 4 years ago
There's some absolute nonsense in the TDD style. Exposing internal details for testing is recommended, which is bad for non-test users of the interface. Only testing through the interface (kind of the opposite of the above) means tests contort to hit the edge cases, or miss them entirely.

The whole interface hazard evaporates if you write the tests in the same scope as the implementation, so the tests can access internals directly without changing the interface. E.g. put them in the same translation unit for C++. Have separate source files only containing API tests as well if you like. Weird that's so unpopular.

There's also a strong synergy with design by contract, especially for data structures. Put (expensive) pre/post and invariants on the methods, then hit the edge cases from unit tests, and fuzz the thing for good measure. You get exactly the public API you want plus great assurance that the structure works, provided you don't change semantics when disabling the contract checks.
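The same-scope-tests plus design-by-contract combination can be sketched in Python (the structure and names are invented; the C++ analogue would put the tests in the same translation unit):

```python
import bisect
import random

CHECKS = True  # expensive contract checks; disabling them must not change semantics

class SortedList:
    """Toy structure: the invariant runs after every mutation when CHECKS is on."""
    def __init__(self):
        self._items = []

    def _invariant(self):
        assert all(a <= b for a, b in zip(self._items, self._items[1:]))

    def insert(self, x):
        bisect.insort(self._items, x)
        if CHECKS:
            self._invariant()

    def __contains__(self, x):
        i = bisect.bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x

# Edge cases through the public API...
s = SortedList()
for x in [3, 1, 2, 1]:
    s.insert(x)
assert 2 in s and 7 not in s
# ...plus a same-scope peek at internals, no widening of the interface.
assert s._items == [1, 1, 2, 3]

# Fuzzing for good measure: every insert re-checks the invariant.
f = SortedList()
for _ in range(500):
    f.insert(random.randint(0, 20))
```

The public API stays exactly what users need, while the contract checks and fuzz loop do the assurance work.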

rmetzler · 4 years ago
It’s similar in Java, where people often only know about public and private, and forget about package scoped functions. You can use these to test utility functions etc.

The post is weird, I agree with almost everything in the first half and disagreed with most of the second part.

What makes TDD hard for integration testing is that there are no simple readymade tools similar to XUnit frameworks and people need to build their own tools and make them fast.

marginalia_nu · 4 years ago
I've always sort of thought of TDD a bit of a software development methodology cryptid. At best you get shaky camcorder footage (although on closer investigation it sure looks like Uncle Bob in a gorilla suit).

Lots of shops claim to do TDD, but in practice what they mean is that they sometimes write unit tests. I've literally never encountered it outside of toy examples and small academic exercises.

Where is the software successfully developed according to TDD principles? Surely a superior method of software development should produce abundant examples of superior software? TDD has been around for a pretty long time.

gnulinux · 4 years ago
In my current company, I'm practicing TDD (not religiously, in a reasonable way). What this means for us (for me, my coworkers and my manager):

1. No bug is ever fixed before we have at least one failing test. Test needs to fail, and then turn green after bugfix. [1]

2. No new code ever committed without a test specifically testing the behavior expected from the new code. Test needs to fail, and then turn green after the new code.

3. If we're writing a brand new service/product/program etc., we first create a spec in human language, then turn the spec into tests. This doesn't mean, formally speaking, "write tests first, code later", because we write tests and code at the same time. It's just that everything in the spec has to have an accompanying test, and every behavior in the code needs to have a test. This is checked informally.

As they say, unit tests are also code, and all code has bugs. In particular, tests have bugs too. So this framework is not bullet-proof either, but I've personally been enjoying working in this flow.

[1] The only exception is if there is a serious prod incident. Then we fix the bug first. When this happens, I, personally, remove the fix, make sure a test fails, then add the fix back.
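Point 1 of the flow above, sketched in Python (the bug and the names are hypothetical): the regression test is written against the shipped version, observed red, and only then does the fix turn it green.

```python
def parse_port(value):
    # The shipped version was `int(value)`, which raised ValueError on
    # config lines like "8080\n". The regression test below was written
    # first and failed against that version; .strip() is the fix.
    return int(value.strip())

# Regression test: red before the fix, green after.
assert parse_port("8080\n") == 8080
assert parse_port("  443 ") == 443
```

Reverting the `.strip()` and watching the assertion fail is exactly the "remove the fix, make sure a test fails, then add the fix back" step described in the footnote.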

int_19h · 4 years ago
Of all your tests, what is the proportion of tests that test exceptional code paths vs regular flow?
fsdghrth3 · 4 years ago
I use TDD as a tool. I find it quite heavy handed for maintenance of legacy code where I basically know the solution to the task up front. I can either just rely on having enough existing coverage or create one test for my change and fix it all in one step.

The times I actually use TDD are basically limited to really tricky problems I don't know how to solve or break down or when I have a problem with some rough ideas for domain boundaries but I don't quite know where I should draw the lines around things. TDD pulls these out of thin air like magic and they consistently take less time to reach than if I just sit there and think about it for a week by trying different approaches out.

fiddlerwoaroof · 4 years ago
I’ve worked at a place where we did TDD quite a bit. What I discovered was the important part was knowing what makes code easy to test and not the actual TDD methodology.
twic · 4 years ago
I've worked at three companies that did TDD rigorously. It absolutely does exist.
klysm · 4 years ago
Was it worth it? In what languages?