

Journal of samtregar (2699)

Thursday November 13, 2003
02:24 PM

What's the big problem with Test::More's no_plan?

[ #15751 ]
For the second time I've received a patch for a module of mine changing:

use Test::More qw(no_plan);

To:

use Test::More tests => 218;

From my perspective this just makes adding tests harder. Every time I add a test to the bottom of the file I have to remember to scroll all the way to the top and update the magic number.
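To make the annoyance concrete, here's a trivial made-up example:

use Test::More tests => 3;   # the magic number lives up here

ok(1, "widget compiles");
ok(1, "widget spins");
ok(1, "widget stops");

# Adding a fourth ok() at the bottom means scrolling back up and
# bumping "tests => 3" to "tests => 4".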

What am I missing?

-sam

  • If you have loops, skips, or possible early exits, it's helpful to know how many tests you expect to run, in case you run more or fewer.

    It's also possible to avoid loops and skips as well as most early exits, so if you put a dependency on Test::Harness 2.x or greater, you're probably okay with no_plan.

    • I can't remember the last time I put a call to exit() in a module or a test... What's the problem with loops and skips? I've used those with no_plan and not had trouble.

      -sam

      • Re:Planning X Tests (Score:4, Informative)

        by Ovid (2709) on 2003.11.13 15:41 (#25757) Homepage Journal

        If you're testing a module that has an unexpected exit() in it, you'll be grateful when your test count shows that you didn't run all of the tests.

        As for loops, if you have tests in loops, if the loop runs more or fewer times, the test count will catch that and again you get a failure report. When this happens, I usually find it's a bug in the test, but it's nice to know.
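        For instance, a minimal sketch where the plan catches a loop that ran short (the data is hypothetical):

        use Test::More tests => 5;    # we expect exactly five assertions

        # Imagine an element was accidentally dropped from this list:
        my @inputs = (1, 2, 3, 4);    # should have been five elements

        for my $n (@inputs) {
            ok($n > 0, "input $n is positive");
        }

        # All four tests pass, yet the run fails with something like
        # "Looks like you planned 5 tests but only ran 4."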

        As for skip blocks, those can be tricky. Consider this:

        SKIP: {
            skip "No internet connection", 23, unless $net_connection;
            # some tests
        }

        Now what happens if you've been running that repeatedly without a net connection, but the block actually contains only 21 tests? The first time you run it with a net connection, even if every test passes, the program still fails because the skip count no longer matches. Usually that's just a stale count in the test, but when it's not, you'll be grateful.
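        Scaled down, that trap looks like this (counts shrunk and tests made up for illustration):

        use Test::More tests => 4;

        my $net_connection = 0;    # flip to 1 to simulate being online

        SKIP: {
            # We claim three skipped tests, but only two live here.
            skip "No internet connection", 3 unless $net_connection;
            ok(1, "fetched page");
            ok(1, "parsed page");
        }
        ok(1, "local test");

        # Offline: skip() emits three skips plus one pass = 4, so the
        # file passes. Online: only three tests run against a plan of
        # four, so the file fails -- the stale skip count surfaces only
        # once the skipped tests actually run.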

        • Unconvinced. It's easy to add a test for how many times a loop ran (e.g. is($loop_count, 10)). As for SKIP, I usually just pass skip an arbitrary value, since I don't care how many tests were skipped, just that some were.
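          Concretely, the counting approach under no_plan looks like this (data made up):

          use Test::More qw(no_plan);

          my @inputs = (2, 4, 6);    # hypothetical data

          my $loop_count = 0;
          for my $n (@inputs) {
              ok($n % 2 == 0, "$n is even");
              $loop_count++;
          }

          # The magic number sits next to the loop it checks instead of
          # being folded into a plan at the top of the file.
          is($loop_count, 3, "loop ran the expected number of times");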

          How many real bugs has using a static plan caught for you? How many false alarms?

          -sam

        • If you're testing a module that has an unexpected exit() in it, you'll be grateful when your test count shows that you didn't run all of the tests.

          People often bring this up. I always point out the simple solution: override CORE::exit in Test::More to trap exit() calls. Oddly enough nobody's sent in a patch for that yet. I've gotten lots of schemes to change the way plans work, though. :(
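          A rough sketch of that idea (not anything Test::More actually does today):

          # Trap premature exit() calls so they become visible instead
          # of silently truncating the run. This must be compiled
          # before the code under test.
          BEGIN {
              *CORE::GLOBAL::exit = sub {
                  my $status = defined $_[0] ? $_[0] : 0;
                  warn "exit($status) called before the test finished\n";
                  CORE::exit($status);
              };
          }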

          Loops, as pointed out, are often based on lists whose lengths are not fixed. Even if they are of a fixed width

      • The point is not to test only things you expect to fail. If you're looping over something not under the test's control, say the number of files found by File::Find or the number of elements exported by a module, it's really handy to know if you got more or fewer than you expected.

        There are often other ways to test this, but most of them involve keeping some sort of magic number in your test anyway.

        You may never be bitten by this. I've run into trouble and think it's worth my time to use a test plan.

        • Some magic numbers are more magic than others. I don't mind updating this line when I add a new method:

          is($method_count, 10);

          But hiding that test up in the plan statement, along with every other counted loop... That seems like deep magic to me. And what a pain to debug when it fails, too! How do you know which part of your test file ran too many or too few tests?

          -sam

  • It's a test (Score:4, Informative)

    As I say in one of my talks [petdance.com], having the correct number of tests is itself a test.

    On my todo list for Test::Harness I have the ability to optionally require a plan. Right now, I don't have a way to enforce that every test in my project has a proper plan.

    Sure, it means you have to manually change the test count, but it also guarantees that you won't inadvertently change the number of tests without knowing it.

    --
    xoa

    • There must be a better way to make sure the number of tests remains constant as long as the test file's mtime doesn't change... The only hard part is finding a place to store the data between runs. Hmmm...
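      Something like this toy sketch, maybe (the sidecar cache file and everything about it is hypothetical):

      use Test::More qw(no_plan);

      # Remember how many tests ran, keyed on this file's mtime, and
      # complain if an unchanged file runs a different number of tests.
      my $cache = "$0.count";
      my $mtime = (stat $0)[9];

      END {
          my $ran = Test::More->builder->current_test;
          if (open my $in, '<', $cache) {
              my ($old_mtime, $old_count) = split ' ', scalar <$in>;
              if ($old_mtime == $mtime && $old_count != $ran) {
                  diag("ran $ran tests; last run of this mtime ran $old_count");
              }
          }
          open my $out, '>', $cache or die "can't write $cache: $!";
          print {$out} "$mtime $ran\n";
      }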

      -sam

    • As I say in one of my talks, having the correct number of tests is itself a test.

      As I say, you're a nutter plan nazi. ;P

      For those who don't know: I think the plan is an obsolescent idea that should fade away. Andy thinks plans are really important and should be made even more sophisticated. We each control one half of the Perl testing equation (Test::Builder/More on one side, Test::Harness on the other), which is a Healthy Thing from a checks-and-balances PoV.

      But you're still nuts. :)

      • But you're still nuts. :)

        Yeah, well, your mama writes COBOL! :-)

        My big concern about not using plans is that if something doesn't happen right and the wrong number of tests is run, it's just ignored. chromatic's mention of loops is a good example, and exiting early is another.

        But you know all this already. This is just for the benefit of those watching at this point. :-)

        --
        xoa