All the Perl that's Practical to Extract and Report

TeeJay (2309)

TeeJay
http://www.aarontrevena.co.uk/

Working in Truro
Graduate with BSc (Hons) in Computer Systems and Networks
pm : london.pm, bath.pm, devoncornwall.pm
lug : Devon & Cornwall LUG
CPAN : TEEJAY [cpan.org]
irc : TeeJay
skype : hashbangperl
livejournal : hashbangperl [livejournal.com]
flickr : hashbangperl [flickr.com]

Journal of TeeJay (2309)

Thursday August 17, 2006
09:04 AM

Now that's what I call testing volume 64

[ #30666 ]

So for this contract I'm working on data feeds that are fairly complex. One of the other contractors did most of the work on building a parser using HOP (Higher-Order Perl) crack, but I'm writing the tests.

Now the message format is pretty variable, but mostly tokenisable, with some freeform. The number of possible valid variations for a given place and date is huge, not to mention that each subtype of the message is slightly different and has slightly different rules.
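To give a feel for "mostly tokenisable, with some freeform": a minimal sketch of the idea, where each whitespace-separated token either matches a known field pattern or falls through as freeform. The field names and regexes here are invented for illustration, not the real feed grammar.

```perl
#!/usr/bin/perl
# Minimal sketch only: field names and patterns are invented for
# illustration, not the real feed grammar.
use strict;
use warnings;

my %field_pattern = (
    station => qr/^[A-Z]{4}$/,                         # e.g. EGTE
    time    => qr/^\d{6}Z$/,                           # e.g. 170900Z
    wind    => qr/^(?:\d{3}|VRB)\d{2}(?:G\d{2})?KT$/,  # e.g. 24012KT
);

# Classify a token against the known field patterns; anything
# unrecognised falls through as freeform.
sub classify_token {
    my ($token) = @_;
    for my $field (sort keys %field_pattern) {
        return $field if $token =~ $field_pattern{$field};
    }
    return 'freeform';
}

print join( ' ', map { classify_token($_) }
    split /\s+/, 'EGTE 170900Z 24012KT gusty' ), "\n";
# prints: station time wind freeform
```

The per-subtype rules would then hang off which fields were recognised and in what order.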

The thing is, there's no official validity test for these messages. There is a huge book of rules, but it's aimed at the pilots who read the messages, and very little of it translates directly to machine recognition.

So today I'm doing the same test as the official aviation met offices: chuck a day's worth of data at your parser and see what looks broken.

Luckily for me, instead of an Access database, an Excel spreadsheet, some macros, and a dozen pairs of eyes (the recommended way to test the feeds), I have Test::More and a nice script that will run tens of thousands of tests (one per field per message), providing me with regression testing against what the old parsers would extract from given data.
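The "one test per field per message" shape is straightforward with Test::More. This is a self-verifying sketch of the idea only: `parse_message()` and the control data here are stand-ins for the real parser and the old parser's extracted values.

```perl
#!/usr/bin/perl
# Sketch of "one test per field per message" regression testing.
# parse_message() and %control are invented stand-ins for the real
# parser and the old parser's output.
use strict;
use warnings;
use Test::More;

# message id => fields the old parser extracted (invented data)
my %control = (
    MSG1 => { station => 'EGTE', wind_speed => 12 },
    MSG2 => { station => 'EGHQ', wind_speed => 7  },
);

sub parse_message {
    my ($id) = @_;
    return $control{$id};    # stand-in: pretend the new parser agrees
}

for my $id (sort keys %control) {
    my $parsed = parse_message($id);
    for my $field ( sort keys %{ $control{$id} } ) {
        is( $parsed->{$field}, $control{$id}{$field},
            "$id: $field matches old parser" );
    }
}
done_testing();
```

With real data the loop is the same, just driven by a control file of old-parser output instead of a hard-coded hash.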

Next step is to extract some more recent control data (the hundreds of MB of data I've just been removing duplicates from and testing with are about three years old) and build a test against that, as the standards have changed a bit in the last couple of years and stations will now provide data in whichever format they last read, whether six years ago or last week.

Then onto the specific edge case testing, and handling partial success in parsing data.
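One way to handle partial success — a sketch under my own assumptions about the data shapes, not the real parser — is to consume the tokens that parse and hand back the rest as unparsed freeform, instead of failing the whole message:

```perl
#!/usr/bin/perl
# Sketch of partial-success parsing: recognised tokens become fields,
# everything else is returned as an unparsed remainder. The patterns
# are illustrative only.
use strict;
use warnings;

sub parse_with_remainder {
    my ($message) = @_;
    my ( %fields, @unparsed );
    for my $token ( split /\s+/, $message ) {
        if    ( $token =~ /^([A-Z]{4})$/ ) { $fields{station} = $1 }
        elsif ( $token =~ /^(\d{6})Z$/ )   { $fields{time}    = $1 }
        else                               { push @unparsed, $token }
    }
    return ( \%fields, \@unparsed );
}

my ( $fields, $rest ) = parse_with_remainder('EGTE 170900Z WIBBLE');
# $fields => { station => 'EGTE', time => '170900' }
# $rest   => ['WIBBLE']
```

The edge-case tests can then assert on both halves: what was extracted and what was left behind.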

Phew - that's a lorra lorra tests.

Once that's done I'll be running the parsers side by side with the same data and comparing the differences in the db (and frontend).
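The side-by-side comparison boils down to diffing, per message, the fields each parser stored. A sketch, assuming each parser's row comes back as a hashref of field => value (not the real db schema):

```perl
#!/usr/bin/perl
# Sketch of diffing two parsers' stored fields for one message.
# The hashref shapes are assumed, not the real db rows.
use strict;
use warnings;

sub diff_fields {
    my ( $old, $new ) = @_;
    my %all = map { $_ => 1 } ( keys %$old, keys %$new );
    my @diffs;
    for my $field ( sort keys %all ) {
        my $o = defined $old->{$field} ? $old->{$field} : '<missing>';
        my $n = defined $new->{$field} ? $new->{$field} : '<missing>';
        push @diffs, "$field: old=$o new=$n" if $o ne $n;
    }
    return @diffs;
}

my @diffs = diff_fields(
    { station => 'EGTE', wind_speed => 12 },
    { station => 'EGTE', wind_speed => 14, gust => 22 },
);
print "$_\n" for @diffs;
# prints:
# gust: old=<missing> new=22
# wind_speed: old=12 new=14
```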

A.

  • Maybe Test::LectroTest [cpan.org] could be useful for your use case? Mind, my suggestion might be garbage, but that’s what “statistical testing” like you’re having to do generally reminds me of.

    • Doesn't look like much help at first glance.

      I'm pretty happy with what I've got now - the new parser is working better than the previous one and passing all of the original test sample.

      Now I just need to build a second test sample with more recent data and hand-write documented edge-case tests.

      The two test samples aren't documented because there are simply too many variations in the data; they're really more regression testing than unit testing, but at a unit-test level.
      --

      @JAPH = qw(Hacker Perl Another Just);
      print reverse @JAPH;