All the Perl that's Practical to Extract and Report

  • I'm convinced that there is no "right" solution to this problem. Perl::Critic has the same issues, as does the CPANTS Kwalitee game. Coverity, FindBugs, PMD and CodePro all have the same issues in the Java world. Even the Acid3 browser test had this problem: IIRC Safari (or was it Opera?) reached 100% compliance, but then someone found a bug in the test and Safari lost its 100%.

    Any test where you can achieve a perfect score on a subjective metric is destined to cause "not my fault" failures in the future.

    For my Test::Virtual::Filesystem [cpan.org] package, I introduced a feature where users can declare themselves compliant with a particular version of the test module. Then I carefully version each test in the package as I make new releases, and if a test's version number is greater than the claimed compliance version, I make it a TODO test.
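    The scheme above can be sketched roughly like this. This is a hypothetical illustration, not the actual Test::Virtual::Filesystem internals: the test names, version numbers, and the check() stub are all invented for the example; only the version-comparison idea comes from the description above.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

our $TODO;    # Test::More reads $main::TODO to detect TODO tests

# Hypothetical: version of the module in which each test was introduced.
my %introduced_in = (
    mkdir_creates_directory => 0.02,
    symlink_roundtrip       => 0.10,
);

# Hypothetical: the module version the user claims compliance with.
my $compliant_version = 0.05;

for my $name (sort keys %introduced_in) {
    if ($introduced_in{$name} > $compliant_version) {
        # Test is newer than the claimed compliance version, so demote
        # it to TODO: a failure is reported but does not fail the suite.
        TODO: {
            local $TODO = "added in $introduced_in{$name}, after "
                        . "claimed compliance with $compliant_version";
            ok(check($name), $name);
        }
    }
    else {
        ok(check($name), $name);
    }
}

# Stand-in for a real filesystem check.
sub check { return 1 }
```

    Run under prove, the symlink test (introduced after the claimed compliance version) is reported with a TODO directive, so a newly added test can never break an older user's suite.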

    That approach works great for validating POSIX-like filesystems, where correctness is quite objective. It would probably also work for Test::Pod, and would partially work for Perl::Critic: making new policies TODO failures is easy; deciding which changes to existing policies should be TODO versus real failures is hard.