So for this contract I'm working on data feeds that are fairly complex. One of the other contractors did most of the work building a parser using HOP, but I'm writing the tests.
Now the message format is pretty variable, but mostly tokenisable, with some freeform sections. The number of possible valid variations for a given place and date is huge, not to mention that each subtype of the message is slightly different and has slightly different rules.
The thing is, there is no official validity test for these messages. There is a huge book of rules, but it's written for the pilots reading these messages, and very little of it translates directly to machine recognition.
So today I'm doing the same test as the official aviation met offices: chuck a day's worth of data at your parser and see what looks broken.
Luckily for me, instead of an Access database, an Excel spreadsheet, some macros, and a dozen pairs of eyes (the recommended way to test the feeds), I have Test::More and a nice script that will run tens of thousands of tests (one per field per message), providing me with regression testing against what the old parsers would extract from given data.
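The one-test-per-field-per-message idea can be sketched roughly like this. The real script obviously drives the HOP-based parser against the old parsers' output; here `parse_message` and the `%control` data are purely illustrative stand-ins for that:

```perl
#!/usr/bin/perl
# Sketch of a field-by-field regression harness with Test::More.
# parse_message() is a hypothetical stand-in for the real parser;
# %control stands in for what the old parsers extracted.
use strict;
use warnings;
use Test::More;

# Toy parser: split a message into named fields.
sub parse_message {
    my ($raw) = @_;
    my %fields;
    @fields{qw(station day time wind)} = split ' ', $raw;
    return \%fields;
}

# In the real script this would be loaded from the old parsers' output.
my %control = (
    'EGLL 27 1200Z 24010KT' => {
        station => 'EGLL', day => '27', time => '1200Z', wind => '24010KT',
    },
);

# One is() check per field per message.
while ( my ( $raw, $expected ) = each %control ) {
    my $got = parse_message($raw);
    for my $field ( sort keys %$expected ) {
        is( $got->{$field}, $expected->{$field}, "$raw: $field" );
    }
}

done_testing();
```

Scale the control hash up to a day's worth of messages and the test count climbs into the tens of thousands very quickly.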
Next step is to extract some more recent control data (the hundreds of MB of data I've just been removing duplicates from and testing with are about three years old) and build a test against that, as the standards have changed a bit in the last couple of years and stations will now provide data in whichever format they last read, whether that was six years ago or last week.
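The duplicate-removal pass doesn't need anything fancier than a seen-hash; a minimal sketch (assuming one message per line, which may not match the real feed layout):

```perl
#!/usr/bin/perl
# Minimal dedup of the kind used to clean the control data:
# keep the first occurrence of each message, drop verbatim repeats.
use strict;
use warnings;

sub dedup {
    my %seen;
    return grep { !$seen{$_}++ } @_;    # true only the first time we see it
}

# Usage: stream a feed file through it line by line.
print dedup(<>) unless caller;
```

The `%seen` hash is the whole trick: `$seen{$_}++` returns 0 (false) the first time a message appears and non-zero thereafter, so `grep` keeps only first occurrences.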
Then onto the specific edge case testing, and handling partial success in parsing data.
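For the partial-success case, one plausible shape (an assumption on my part, not the real parser's interface) is for the parser to hand back both the fields it could extract and the fragments it couldn't, so tests can assert on each side separately:

```perl
#!/usr/bin/perl
# Hypothetical shape for partial-success parsing: return the fields we
# could extract plus a list of tokens we couldn't make sense of.
use strict;
use warnings;

sub parse_with_errors {
    my ($raw) = @_;
    my ( %fields, @unparsed );
    for my $tok ( split ' ', $raw ) {
        if    ( $tok =~ /^[A-Z]{4}$/ ) { $fields{station} = $tok }   # e.g. ICAO id
        elsif ( $tok =~ /^\d{5}KT$/ )  { $fields{wind}    = $tok }   # e.g. wind group
        else                           { push @unparsed, $tok }      # couldn't parse
    }
    return ( \%fields, \@unparsed );
}
```

A test can then check both that the good fields survive a mangled message and that the junk ends up flagged rather than silently dropped.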
Phew - that's a lorra lorra tests.
Once that's done, I'll be running the parsers side by side on the same data and comparing the differences in the db (and frontend).
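The side-by-side comparison boils down to diffing two sets of extracted fields per message; a minimal sketch of that step (field names and record shape are assumed, the real comparison would run over db rows):

```perl
#!/usr/bin/perl
# Sketch: given the old and new parsers' field hashes for one message,
# return the names of fields where they disagree.
use strict;
use warnings;

sub diff_fields {
    my ( $old, $new ) = @_;
    my %keys = map { $_ => 1 } keys %$old, keys %$new;
    return grep {
        no warnings 'uninitialized';    # a field missing on one side counts as a diff
        $old->{$_} ne $new->{$_}
    } sort keys %keys;
}
```

Taking the union of both key sets matters: a field the new parser drops (or invents) shows up as a difference instead of being skipped.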