Journal of Alias (5735)

Wednesday November 18, 2009
07:07 PM

The unfortunate demise of the plan

[ #39916 ]

Over the last year, I've seen a disturbing trend on the part of some of Perl's testing thought-leaders (ugh... what's a better word for this... Testerati?) to demonise the testing plan.

I thought I'd take a moment to come to the defense of the plan (and explain why I don't like done_testing() except in one very specific situation).

When I wrote my first CPAN module, and as a result discovered Perl testing, the thing that impressed me most of all was not the syntax, or the fact that tests were just programs (both of which I like).

It was the testing plan that I found to be a stroke of brilliance.

Even though it can be a little annoying to maintain the number (although updating this number sounds like a good feature for an editor to implement), the plan catches two major problems simultaneously.

1. It detects tests that end early, abort, die, or crash, even when the failure is instantaneous and leaves no other evidence that it happened.

2. It detects running too many tests, too few tests, bad skip blocks, and other soft failures, removing the need to write piles of explicit list-length tests.

It also removes the need to say when you are done, reducing the size of your test code.

For example, look at the following two code blocks, which are equivalent.

This is the new no-plan done_testing way:

use Test::More;

my @list = list();
is( scalar(@list), 2, 'Found 2 members' );

foreach ( @list ) {
        ok( $_, 'List member ok' );
}

done_testing();

And this is the original way:

use Test::More tests => 2;

foreach ( list() ) {
        ok( $_, 'List member ok' );
}

The difference is stark. In the original, I know I'm doing 2 tests, so I don't need to test for the size of the list returned by list(). If list() returns 3 things, or 1 thing, the test will fail.
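
That failure mode can be sketched concretely (list() here is a hypothetical stand-in for the code under test):

```perl
use Test::More tests => 2;

# Hypothetical stand-in for the code under test.
sub list { ( 'alpha', 'beta' ) }

# Two ok() calls match the plan, so this passes. Had list() returned
# one or three members, the harness would report "Looks like you
# planned 2 tests but ran N" with no explicit length check needed.
ok( $_, 'List member ok' ) for list();
```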

There is one clear use case for done_testing (and by the way, who decided that done_testing was a great name for a function? What ELSE are you going to be done with? Surely just done() is good enough) and that is when the number of elements returned by list() is unknowable.

Even in this case, I still prefer explicitly telling Test::More that I have NFI how many tests will run. That at least gives it some certainty that I have actually started testing.
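
For reference, that explicit "no idea" declaration is a one-liner (list() is again hypothetical):

```perl
use Test::More 'no_plan';

# Hypothetical function whose result count cannot be known up front.
sub list { map { "item $_" } 1 .. ( 1 + int rand 4 ) }

# no_plan tells Test::More up front that testing has started but the
# final count is unknowable, unlike done_testing, which only says so
# at the very end of the run.
ok( $_, 'List member ok' ) for list();
```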

The new paradigm of not using a plan is far, far messier, for no obvious benefit that I can see, and it opens up new areas where bugs can creep in far too easily.

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • It seems to me that the example you give is ultimately a poor test. You're relying upon an implicit feature of the testing interface to test for explicit behaviors of your code. As a person new to your project, how am I supposed to know that list() is only ever supposed to return two elements? Sure, the tests fail, but how do I know that the tests were correct in the first place?

    By making an explicit test for the explicit behavior, I can communicate to someone else that I'm expecting list() to only ever return two elements.

    • I agree with your assessment of no_plan, at least when you used it you were explicitly saying there was no plan...

    • Agreed. There are times when you need a plan for sure (my current project, for example, where I test for lots of different types of exceptions) but in general I find it cruft. Also note that no_plan doesn't (I think) detect dying halfway through.
      • Which is why I've happily moved to done_testing for future code.
      • no_plan won't detect early termination, but Test::Harness will note the non-zero exit status on a die.

        • It is more common than I'd like for END and DESTROY blocks to do something that clears the exit status of a Perl program on its way out the door.

          In fact it is common enough that I'd prefer not to rely on it to detect abnormal ending of a test suite. Particularly since many test suites like to do cleanup when they are done.
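
A minimal sketch of the failure described above, run in a child process so the masked status is observable:

```perl
# An END block that resets $? hides the non-zero exit status a die
# would otherwise produce, so a harness watching the exit code sees
# a "clean" run despite the abort. (perlvar: inside END, $? holds
# the value the program is about to pass to exit().)
my $status = system( $^X, '-e', 'END { $? = 0 } die "boom\n"' );

print $status == 0 ? "exit status masked\n" : "exit status: $status\n";
```

Despite the die, the child exits with status 0, which is exactly why exit status alone is an unreliable signal of an aborted test suite.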

  • The Aegis software management tool has been around for a long time, but I strongly suspect that part of the reason it never became very popular is that it enforced a particular belief on users. For example, your code is always in one of multiple states, and in the "being developed" state you cannot check in unless you have new tests and your code passes those tests. There are numerous other states and numerous other preconditions which developers must satisfy before they can move the code to the next state.

    • Are we talking about organised religion or programming? I find the language in your last couple of comments… creepy.

      • The religious allusions were deliberate. The dogmatism of those who insist that we "must have a plan" put people off in the same way that dogmatists for anything will put off certain parts of the population. I don't care for dogmatism because it's often a fancy word for "we think we understand this, so we don't need critical thought". Since dogmatism is often accompanied by a near religious fervor, I thought it was OK to continue with the metaphor.

        Sorry if I was a little too zealous in that. Obviously i

        • You lost me when you started talking about bringing people into the fold and such. I want to teach trade-offs and how to weigh them, not convert people to a doctrine – any doctrine.

          • My apologies. I meant that strictly as tongue-in-cheek. I should hope that those who know me would appreciate the irony of me using religious allusions. I'm not serious, dude :)

    • We've never insisted on a plan, you just had to explicitly say that you didn't have one.

      What I'm seeing now is the opposite, you (and others) seem to be actively encouraging people to NOT use plans. Or at least, that is the impression I get.

      People have never HAD to use DBI placeholders either, but the standard documentation consistently recommends them.

      • Not using DBI placeholders is far more serious than not using a plan: it can leave you wide open to serious security holes. Going without a plan, while risky, is far less so.

        When I'm moving, my friends and I carry stuff into my house and I leave the door unlocked; I lock the door when we're done. Similarly, when I write tests, I set them as no_plan and add the plan when I'm done. What I want to do is minimize the accounting while writing tests; when the developer is done, they lock the door.

  • I like to use the argument that done_testing accepts, which lets you say how many tests you expect to have run by that point.

    It's like specifying a plan, except you can do so programmatically after the fact, which eliminates the need for tedious manual counting.

    All you need to know is how many tests occur within a given block of code and increment your counter accordingly.

    use Test::More;

    my $t;

    foreach ( list() ) {
      ok( $_, 'List member ok' );
      $t++;
    }

    $t += 3; # list called the second time

    done_testing($t);

    • There's surely a better way to get the same effect.

      Switch to Test::Class. I've an extensive article on its use and best practices. When nested TAP is finally stable, Adrian Howard has a new version available which utilizes it. It's much cleaner.

      • I've found that Test::Class testing code is significantly harder to maintain than the "bunch of Perl scripts" of the regular way.

        That's just me though...

    • Yes, this seems the way to go to me, especially since I have been writing my tests like

          use Test::More;

          my $tests;

          BEGIN { $tests += 2 }
          for (list()) {
              ok $_, "List member ok";
          }

          BEGIN { plan tests => $tests }

      for quite a while now. AFAIC done_testing($tests) simply removes the need for the ugly BEGIN blocks.

      The other thing I quite like about done_testing is that it removes the need for SKIP: blocks.
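
A sketch of that contrast, using a hypothetical EXTENDED_TESTING switch: under a numeric plan the conditional test would need a SKIP block to keep the declared count honest, whereas done_testing simply counts whatever ran:

```perl
use Test::More;

# Hypothetical switch; with "tests => 2" this branch would need a
# SKIP block so the declared count still matched when it is off.
if ( $ENV{EXTENDED_TESTING} ) {
    ok( 1, 'extended check ran' );
}

ok( 1, 'basic check ran' );

# done_testing() accepts however many tests actually executed.
done_testing();
```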

  • And of course now I'm getting stuff like this when installing new CPAN modules with my old Test::* stuff: Bareword "done_testing" not allowed while "strict subs" in use at t/51_since.t line 57.
    • That is just the regular kind of stupid, forgetting that because you are using new syntax, you need new versions in your deps.

      • It should be fairly easy to write a detector for too, just look for done_testing in modules and compare it to their META.yml.

        • This is where it comes in extra handy that done_testing has a stupidly (or so it would seem) long name. :-)
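
A rough sketch of such a detector's core check (the helper name is made up; done_testing first shipped in Test::More 0.88):

```perl
use strict;
use warnings;

# Hypothetical helper: flag test code that uses done_testing while
# the declared Test::More dependency (from META.yml) is too old or
# missing. done_testing() first appeared in Test::More 0.88.
sub needs_dep_bump {
    my ( $test_source, $declared_version ) = @_;
    return 0 unless $test_source =~ /\bdone_testing\b/;
    return 1 if !defined $declared_version || $declared_version < 0.88;
    return 0;
}

print needs_dep_bump( 'done_testing();', 0.47 ) ? "bump needed\n" : "ok\n";
```

In practice the scanner would walk each distribution's t/*.t files and read the Test::More entry out of the requires section of META.yml.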

  • Even though it can be a little annoying to maintain the number (although updating this number sounds like a good feature for an editor to implement)

    Funny, I just added a feature to do that for emacs; it's in the latest release of my perlnow.el.

    If you run a *.t file from inside of emacs using perlnow.el, you can then do a "perlnow-revise-test-plan" which changes the plan to match the number of tests you actually ran.

  • A little late to the party, but aren't plan and done_testing simply orthogonal? The numeric plan would be used when you need to make sure something fired a specific number of times[1]. done_testing would be used when you simply need to know that the entire test got executed end-to-end without exiting/segfaulting somewhere in the middle[2] (exceptions would be caught by the harness as noted above).

    Is there *any* conceivable reason to have to know how many tests are in fact being executed in [2]?