
Journal of chromatic (983)

Monday March 09, 2009
04:03 PM

Why There Are No Conclusive Studies about TDD Efficacy


Ovid doesn't always use TDD, even for production-style programming. (That's non-exploratory programming.) He lamented that:

One problem I have with the testing world is that many "best practices" are backed up with anecdotes....

This will continue to be true until it's possible to measure the cost of creating and maintaining a piece of software throughout its lifecycle. If TDD costs me 25% of my initial development time but saves multiple day-long debugging sessions over five years, I'll stick with TDD. (I can give you plenty of anecdotes where the lack of tests cost lots of debugging time, including one amusing-only-in-retrospect semi-predicate problem: I reverted a checkin from another developer, who reverted mine, until we both realized that he was fixing one bug and I another, and that the code had to account for all three termination conditions.)
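
To make that semi-predicate problem concrete, here's a minimal Perl sketch. The find_offset routine and its three termination conditions (a match that may legitimately be at offset 0, no match, and an error) are hypothetical illustrations, not the actual code from that checkin war.

    use strict;
    use warnings;

    # Buggy: one return value covers all three termination conditions.
    # A match at offset 0 is false, so "if (find_offset_buggy(...))"
    # treats it as a failure, and undef conflates "not found" with
    # "bad input".
    sub find_offset_buggy {
        my ($haystack, $needle) = @_;
        return undef unless defined $haystack;    # error...
        my $pos = index $haystack, $needle;
        return $pos == -1 ? undef : $pos;         # ...and "not found" look identical
    }

    # One fix: return a status/value pair so each termination condition
    # is explicit and a legitimate offset of 0 survives boolean tests.
    sub find_offset {
        my ($haystack, $needle) = @_;
        return ( error => undef ) unless defined $haystack;
        my $pos = index $haystack, $needle;
        return $pos == -1 ? ( missing => undef ) : ( ok => $pos );
    }

    my ($status, $offset) = find_offset( 'perl rules', 'perl' );
    print "$status: ", defined $offset ? $offset : 'n/a', "\n";    # prints "ok: 0"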

Of course, those studies still won't be valid until there's a way to get repeatable, condition-isolated results out of them, and that requires turning programming into a mechanical practice: devoid of all creativity, repeatable ad infinitum, unmodified under laboratory conditions.

I can only give you the best advice I have. In the past decade, I've become a much better programmer in part due to learning how to use TDD effectively. (I've also had a decade more practice, though.)
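
For anyone who has never seen the practice, here's a minimal sketch of the test-first rhythm in Perl with Test::More; the Counter class is a made-up example, not code from any project mentioned here. You write failing tests for behavior that doesn't exist yet, write just enough code to make them pass, then refactor.

    use strict;
    use warnings;
    use Test::More tests => 2;

    # Step 1: write the tests for behavior that doesn't exist yet and
    # watch them fail.  Step 2: write just enough code to pass them.
    {
        package Counter;
        sub new       { bless { count => 0 }, shift }
        sub increment { $_[0]{count}++; return $_[0] }
        sub count     { $_[0]{count} }
    }

    my $counter = Counter->new;
    is $counter->count, 0, 'a new counter starts at zero';

    $counter->increment->increment;
    is $counter->count, 2, 'incrementing twice counts to two';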

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • Frankly, I've no problem with you using Anecdote Driven Development. I have every problem with testing advocates telling me I should be relying on their anecdotes.

    This will continue to be true until it's possible to measure the cost of creating and maintaining a piece of software throughout its lifecycle.

    What you're actually saying here is "it's very, very hard to accurately measure this cost, so ...". I do agree with the "very hard" bit (well, I might say "impossible"), but the same is true for just about any complicated research out there. It's not a case of pouring A into B and determining if C results. Studies in a variety of fields require acknowledging that things can't be done perfectly, but we can do enough to get a sense of how things actually work to make sane recommendations.

    • Studies in a variety of fields require acknowledging that things can't be done perfectly, but we can do enough to get a sense of how things actually work to make sane recommendations.

      You might as well quote a study that says "Berlitz German students translate the libretto of The Magic Flute twice as fast with 80% fewer errors than students of other language learning schools." ETOOMANYUNCONTROLLEDVARIABLES.

      • I can't tell what you mean due to your sarcasm. I know you're not saying "the benefits of TDD are too hard to study, so we won't even try" because that would be an idiotic statement to make and you're not an idiot. So what are you trying to say? Are you saying that the efficacy of TDD is beyond reason and we must rely on anecdote? Again, I know you're not saying this, but you're not saying much else, either :)

        In short, if you're going to make cause and effect assertions, I want to hear something backing that up other than anecdotes.

        • In short, if you're going to make cause and effect assertions, I want to hear something backing that up other than anecdotes.

          I believe you're in the wrong line of work then. How would you even design such a study?

          If you compare two programmers, one using TDD and one not, you have to account for productivity, knowledge, experience, and creative differences between them.

          If you compare two teams, one using TDD and one not, you have to account for the same, multiplied by at least the number of programmers on each team.

          • So I can now write a new blog post entitled "chromatic admits there's no evidence for testing's effectiveness" :)

            • If you want people to think you have the reading comprehension of the average programming.reddit.com commenter these days, feel free. (I know you're teasing -- your epistemology isn't broken -- but subtlety is deader on the Internet than in modern literature.)

              • For everyone else following at home, my response was teasing too. Ovid's in no danger of misunderstanding what I wrote.

          • If you compare two programmers, one using TDD and one not, you have to account for productivity, knowledge, experience, and creative differences between them.

            Sociologists deal with that sort of thing all the time. Their methods aren't perfect, but they do manage to get somewhere.

            The secret is that you don't compare two programmers; you compare two thousand.

          • I believe you're in the wrong line of work then. How would you even design such a study?

            What I would do is consult some social scientists. We are in the wrong line of work to be ruling out the possibility of doing an intelligent study of programming methodologies.