  • Question: How do I test something graphical? For example, I have routines that are generating charts and graphs in PNG format.

    Answer: One way would be to pre-generate "known good" charts and graphs, save them in a test_data directory, then run your routines and see whether the output files are the same.
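
    A minimal sketch of that approach, assuming a hypothetical make_chart() routine and a reference file at test_data/expected_chart.png (both names are placeholders, not anything from the original post):

    use Test::More tests => 1;
    use File::Compare;

    # regenerate the chart, then compare it byte-for-byte against the
    # pre-generated "known good" copy; compare() returns 0 on a match
    make_chart('chart.png');
    ok( compare( 'chart.png', 'test_data/expected_chart.png' ) == 0,
        'generated chart matches the known-good PNG' );

    One caveat: a byte-for-byte comparison can fail spuriously if the PNG library embeds timestamps or changes compression settings, so a mismatch may need a human look rather than being treated as an automatic failure.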

    Question: How would I test a GUI interface for something?

    Answer: I've struggled with this and for the most part, I don't think there are good, generic answers. One suggestion that I've heard is to have the system take screenshots of the GUIs and use the method listed in my previous answer. This is very fragile and non-portable, however. Further, a decent answer would vary depending upon how you create your GUI (Template Toolkit front ends would be tested much differently from a Tk framework).

    Question: How do I test routines with side effects? (e.g., a routine prints to the screen instead of, or in addition to, returning a value.)

    Answer: One way is to build hooks into your system:

    # somewhere in package Foo:
    $self->_print($some_text);

    And in your test suite:

    {
        my $message;
        # _print is called as a method, so the message is the second argument
        no warnings 'redefine';    # silence any "redefined" warning from overriding the hook
        local *Foo::_print = sub { $message = $_[1] };
        my $result = $object->method;
        ok($result, 'Calling method() should have a result');
        like($message, qr/$some_text/, '... and it should also print some stuff');
    }

    Not only does this make it easy to trap the data, it also gives you a very easy way of logging, trapping, or munging those _print messages later.

    As for the last technique, I used to hate to do that because it seemed "wrong" to build testing hooks into the code. As time goes on, though, I've found that it makes testing easier and I've never found an actual drawback.
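
    For what it's worth, the production side of the hook can stay trivial. A minimal sketch of a default _print (the body is my guess at the intent, not code from the post), along with the kind of logging variant mentioned above:

    # in package Foo: by default the hook just prints
    sub _print {
        my ( $self, $text ) = @_;
        print $text;
        return;
    }

    # later, routing everything through a logger (assuming the object
    # carries one) is a one-line change:
    # sub _print { my ( $self, $text ) = @_; $self->{logger}->info($text) }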

    • What we did at an old job was get a known good image (chart, graph, or even a snapshot of the desktop).

      Then create a checksum (md5 is kind of small but I liked to use sha1). Then you just need to create a checksum of the test image and compare checksums, and any mismatch could flag the image for human review (a rough sketch follows below).

      There were some other super-simple things I came up with as well for images and rough UI testing.
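
      A rough sketch of the checksum comparison (the file names are placeholders; Digest::SHA has shipped with Perl for a long time):

      use Test::More tests => 1;
      use Digest::SHA;

      # checksum of the freshly generated image
      my $got      = Digest::SHA->new(1)->addfile('chart.png')->hexdigest;

      # checksum of the known-good image saved earlier
      my $expected = Digest::SHA->new(1)->addfile('test_data/expected_chart.png')->hexdigest;

      is( $got, $expected, 'chart matches the known-good checksum' )
          or diag('image differs from the reference - flag it for human review');
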
    • I've occasionally used another technique for testing the text output of legacy scripts to prevent regression while refactoring or fixing bugs. Redirect all output to a file, for both the new and old versions of the script, and then use "diff" to detect differences. This is certainly worse than fully automated testing, but much better than fully manual testing.

      Redirecting "all output" is slightly more challenging if the legacy code doesn't have the hooks Ovid recommends, but not impossible. Although "pri