
Journal of Ovid (2709)

Monday October 26, 2009
08:02 AM

Using subtests

[ #39802 ]

In writing Pod::Parser::Groffmom, I decided to start using the new subtest feature in Test::More. Since I added that feature, I figured I should eat my own dog food.

Why would you want subtests? As test suites grow in size, you often see stuff like this:

    {
        diag "Checking customer";
        ok my $customer = Customer->new({
            given_name  => 'John',
            family_name => 'Public',
        }), 'Creating a new customer should succeed';
        isa_ok $customer, 'Customer';

        can_ok $customer, 'given_name';
        is $customer->given_name, 'John',
          'given_name() should return the correct value';
        # ... and so on
    }

Programming like this is sometimes useful when we want to:

  1. Locally override a subroutine, method or variable.
  2. Create new variables without them leaking to a file scope.
  3. Group a bunch of tests.
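A minimal sketch of that bare-block pattern, separate from any test framework (the `$verbose` flag and the hashref "customer" are placeholders for illustration, not real Customer code):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# A package variable we might want to override temporarily.
our $verbose = 1;

{
    # 1. local() overrides $verbose for this block only; the old
    #    value is restored automatically when the block exits.
    local $verbose = 0;

    # 2. $customer is lexical to this block and cannot leak out
    #    to file scope.
    my $customer = { given_name => 'John', family_name => 'Public' };

    # 3. All the checks for one feature live together here.
    print "in block: verbose=$verbose, name=$customer->{given_name}\n";
}

print "after block: verbose=$verbose\n";    # the override is gone
```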

Points 1 and 2 are obvious, but what about 3? A sure sign that you want grouping is test output like this (from my Sub::Information tests):

ok 5 - Sub::Information->can('name')
ok 6 - ... and its helper module should not be loaded before it is needed
ok 7 - ... and it should return the original name of the subroutine
ok 8 - ... and its helper module should be loaded after it is needed

That's an example of writing tests in a narrative style (something chromatic taught me) so that the output is (somewhat) human-readable. It's also a case where I have four assertions logically grouped together to test the behavior of a single feature (think "xUnit"). By grouping tests this way and by encapsulating our scope, we can often refactor our tests more easily and in a way that makes sense. So I decided to use subtests. Here's what it looks like:

#!/usr/bin/env perl

use strict;
use warnings;

use Test::Most tests => 5;
use Pod::Parser::Groffmom;

my $parser;

subtest 'constructor' => sub {
    plan tests => 2;
    can_ok 'Pod::Parser::Groffmom', 'new';
    $parser = Pod::Parser::Groffmom->new;
    isa_ok $parser, 'Pod::Parser::Groffmom', '... and the object it returns';
};

subtest 'trim' => sub {
    plan tests => 2;
    can_ok $parser, '_trim';
    my $text = <<' END';

this is
 text

 END
    is $parser->_trim($text), "this is\n text",
      '... and it should remove leading and trailing whitespace';
};

subtest 'escape' => sub {
    plan tests => 2;
    can_ok $parser, '_escape';
    is $parser->_escape('Curtis "Ovid" Poe'), 'Curtis \\[dq]Ovid\\[dq] Poe',
      '... and it should properly escape our data';
};

subtest 'interior sequences' => sub {
    plan tests => 6;
    can_ok $parser, 'interior_sequence';

    is $parser->interior_sequence( 'I', 'italics' ),
      '\\f[I]italics\\f[P]', '... and it should render italics correctly';
    is $parser->interior_sequence( 'B', 'bold' ),
      '\\f[B]bold\\f[P]', '... and it should render bold correctly';
    is $parser->interior_sequence( 'C', 'code' ),
      '\\f[C]code\\f[P]', '... and it should render code correctly';
    my $result;
    warning_like { $result = $parser->interior_sequence( '?', 'unknown' ) }
    qr/^Unknown sequence \Q(?<unknown>)\E/,
      'Unknown sequences should warn correctly';
    is $result, 'unknown', '... but still return the sequence interior';
};

subtest 'textblock' => sub {
    plan tests => 2;
    my $text = <<' END';
This is some text with
  an embedded C<code> block.
 END
    my $expected = <<' END';
This is some text with
  an embedded \f[C]code\f[P] block.
 END

    can_ok $parser, 'textblock';
    eq_or_diff $parser->textblock( $text, 2, 3 ), $expected,
      '... and it should parse textblocks correctly';
};
(Note that the top-level plan lists only five tests because each subtest counts as a single test.)

And the output:

$ prove -lv t/internals.t
t/internals.t ..
    ok 1 - Pod::Parser::Groffmom->can('new')
    ok 2 - ... and the object it returns isa Pod::Parser::Groffmom
ok 1 - constructor
    ok 1 - Pod::Parser::Groffmom->can('_trim')
    ok 2 - ... and it should remove leading and trailing whitespace
ok 2 - trim
    ok 1 - Pod::Parser::Groffmom->can('_escape')
    ok 2 - ... and it should properly escape our data
ok 3 - escape
    ok 1 - Pod::Parser::Groffmom->can('interior_sequence')
    ok 2 - ... and it should render italics correctly
    ok 3 - ... and it should render bold correctly
    ok 4 - ... and it should render code correctly
    ok 5 - Unknown sequences should warn correctly
    ok 6 - ... but still return the sequence interior
ok 4 - interior sequences
    ok 1 - Pod::Parser::Groffmom->can('textblock')
    ok 2 - ... and it should parse textblocks correctly
ok 5 - textblock
All tests successful.
Files=1, Tests=5,  2 wallclock secs ( 0.03 usr  0.01 sys +  0.29 cusr  0.07 csys =  0.40 CPU)
Result: PASS
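To see that counting behavior in isolation, here is a minimal file (not from the Groffmom suite; the names and assertions are placeholders) whose top-level plan is 2 even though five assertions run:

```perl
use strict;
use warnings;
use Test::More tests => 2;    # two subtests, not five assertions

subtest 'first group' => sub {
    plan tests => 2;
    ok 1, 'first assertion';
    ok 1, 'second assertion';
};

subtest 'second group' => sub {
    plan tests => 3;
    ok 1, 'third assertion';
    ok 1, 'fourth assertion';
    ok 1, 'fifth assertion';
};
```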

That quickly exposed an annoyance. Using subtests surprised me even though I created them! Specifically, having to specify a plan for every subtest is frustrating because I don't know the number of tests before I've written them. Thus, I have to use no_plan for each subtest and then switch it to a real plan afterwards.

I think a better strategy is clear: if no plan is included in a subtest, an implicit done_testing should be assumed. Thus, you could write subtests without specifying a plan but still have a bit of safety. I think I know how to implement this, and it would make test authors' lives simpler.
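Here is what such a plan-free subtest would look like (sufficiently recent versions of Test::More do supply this implicit done_testing; the subtest name and assertions below are placeholders):

```perl
use strict;
use warnings;
use Test::More tests => 1;

# No plan line inside the subtest: with an implicit done_testing,
# the assertion count is checked when the coderef returns.
subtest 'no explicit plan needed' => sub {
    ok 1, 'first assertion';
    ok 1, 'second assertion';
    ok 1, 'third assertion';
};
```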

  • This was a useful example, Ovid.

    I agree being able to skip declaring a plan inside of a subtest would be a nice feature.

  • I tried using this today and noticed another detail: The subtest name is printed *after* the sub-test output. I expected that it would be printed *before* the output, as it appears in the test.

    I don't know if this change is possible with the TAP structure or not.

    • You'd have to block the entire subtest output until the final test is done and then print the summary line followed by the subtests. If you have long-running tests/nested subtests, your test suite would repeatedly appear to freeze.

      Or are you just thinking the name should be printed first and then reprinted after the test, along with the ok/not ok result?