Journal of Alias (5735)

Monday December 10, 2007
10:09 PM

The growing problem of core modules with non-trivial FAILs


For some reason I don't quite understand yet, I've noticed an increasing number of core, dual-life, or massively-depended-on modules move from having little to no CPAN Testers FAIL results to having substantial numbers of them.

The typical failure rate seems to be about 10%, and the list of affected modules is quite scary.

Scalar::Util, File::Temp, File::Spec, Exporter, base, Test::More, Clone, version, Path::Class, YAML, and more...

I know in at least a few of these cases they were largely FAIL-free until recently.

This problem would seem to be compounded by the situation most of these authors are in. Most are extremely busy, or have been maintaining these modules for a long time, and don't necessarily have the time to push them back to 100% PASS any more.

Because of recursive dependencies, failures in these modules have a HUGE impact on the userbase: everything downstream of them inherits the breakage.

I have a small thread of work, which I've been trying to find time-slices for, aimed at measuring this more accurately: generating weightings for modules based on their dependencies and so on.

Hopefully we can then apply these weights to things like CPAN Testers results to find the "worst" bugs and modules from the perspective of the CPAN as a whole.
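
As a rough sketch, a weighting along these lines might work. Everything here is invented for illustration: the %dependents hash would in reality be harvested from the prerequisites declared across all of CPAN.

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Hypothetical map of distribution => direct reverse dependencies,
  # as might be harvested from the PREREQ_PM of every CPAN distribution.
  my %dependents = (
      'Scalar-List-Utils' => [ 'File-Temp', 'Path-Class' ],
      'File-Temp'         => [ 'Path-Class' ],
      'Path-Class'        => [],
  );

  # Weight a distribution by the number of unique distributions that
  # depend on it, directly or recursively. A FAIL in a heavily-weighted
  # distribution hurts far more of the CPAN.
  sub weight {
      my ($dist, $seen) = @_;
      $seen ||= {};
      for my $dep ( @{ $dependents{$dist} || [] } ) {
          next if $seen->{$dep}++;
          weight($dep, $seen);
      }
      return scalar keys %$seen;
  }

  printf "%-20s %d\n", $_, weight($_) for sort keys %dependents;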

At the same time, the next phase of my own module maintenance is to ignore RT for a while and focus on CPAN Testers, to get everything up to 100% PASS.

So while there may still be bugs in the code, at least all the bugs that are tested for will be confirmed as fixed across the board.

  • When I look at http://cpantesters.perl.org/show/Exporter.html I see FAILs that were corrected in 5.61, and NAs because the actual Exporter code does not support pre-5.6 perls out of the box.
    • Just noticed that as well, and yet the CPAN dependencies site still says there are FAILs...
  • Those two things have been behind a bunch of my modules' failures recently. I don't bother trying to make anything work with 5.005 any more, so that's one problem. I've also gotten a rash of weird failures from testers where the failure was clearly a problem with their build environment. It'd be cool if testers could delete failure reports somehow.
    • If your module was tested on 5.005, it's only because your Makefile.PL said (by omission) that it was compatible with 5.005.

      If your module DOESN'T support 5.005, then you should be reporting that.

      Then CPAN Testers won't test the module, and you won't accumulate FAIL reports.
      • A little bit of handholding would be much appreciated...

        Can you tell exactly what line we would need to add in Makefile.PL to achieve this?
        • Can you tell exactly what line we would need to add in Makefile.PL to achieve this?

          Just include use 5.006; and the toolchain will understand that the distribution is not for pre-5.6 perls. Someone has said that this use was a 5.6 thing, but I've seen it work OK with 5.5. I don't know whether even older perls would understand it as well, or whether they would need something more baroque like:

          BEGIN { require 5.006; }

          • If a dependency of a module is declared (within the dependency itself) to be incompatible with the perl version installing the module, does the installer get a warning during the typical "perl Makefile.PL; make; make test; make install"?
            • As far as I know, the installer will try to get away without a successful installation of the dependency, hoping for the best. If the best doesn't happen, the failed test will appear as UNKNOWN in that case, so it correctly won't add to the row of FAILed tests.
          • Actually, I also thought that "use 5.xxx" was a new thing, but it works with at least 5.004, maybe also with older perl versions. And there are certainly no testers around who use anything older than 5.005.
          • Thanks, a Google search reveals that use VERSION; is indeed quite common in Makefile.PL files on CPAN.

            I wasn't sure it was the right way to do it.
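
            Putting the advice in this thread together, a minimal Makefile.PL sketch might look like the following. My::Module and the prerequisite are placeholders; the use 5.006; line at the top is what makes pre-5.6 perls (and hence CPAN Testers machines running them) bail out immediately instead of FAILing the test suite.

              use 5.006;    # dies immediately on perls older than 5.6.0
              use strict;
              use ExtUtils::MakeMaker;

              WriteMakefile(
                  NAME         => 'My::Module',
                  VERSION_FROM => 'lib/My/Module.pm',
                  PREREQ_PM    => {
                      'Scalar::Util' => 0,
                  },
              );
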
  • I think you should take a closer look at the reasons for the FAILs, e.g. by looking at the CPAN Testers Matrix [radzeit.de] to find patterns in the failures.

    For the mentioned distributions it looks like:

    • Scalar-List-Utils: FAILs only with devel perl
    • YAML: mostly only devel and old perls have FAILs
    • File-Spec, Exporter, base: looks OK
    • Test-Simple: granted, there are some unexpected red spots
    • File-Temp: 0.18 completely OK, new problems with 0.19

    I also don't think you should aim for 100% PASS. There are always pro

    • I'm not sure I like the idea of retracting reports, because who is to say what is invalid?

      I know some situations where authors have said reports are invalid for things like not working with Perl 5.005...
      • If you look at the Tk-804.027 reports, you see a lot of FAIL reports which are sort-of invalid: testers who don't have a running X server, so almost all tests fail. Sure, the test suite could first check whether there's a running X server (in fact, this is done for the Tk-804.028-tobe). Well, for now it's more work for me to find the legitimate reports.

        Maybe not retracting reports, but some means of commenting on them would be enough?
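
        (A minimal sketch of such an X-server guard, assuming the test file uses Test::More: attempt to open a MainWindow inside an eval, and skip the whole file when it fails.)

          use strict;
          use warnings;
          use Test::More;
          use Tk;

          # Without a reachable X server, MainWindow->new dies, so the
          # whole file is reported as SKIP rather than a wall of FAILs.
          my $mw = eval { MainWindow->new };
          plan skip_all => 'no X server available' unless $mw;

          plan tests => 1;
          ok( $mw, 'created a MainWindow' );
          $mw->destroy;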

  • Over the past year, I've tested hundreds of modules with bleadperl, and I have to say that quality has actually improved dramatically in that time. I agree that I had dozens of modules failing when I started. But opening bug reports, having Andreas find the root-cause change when it was a new failure, and having developers who care about their modules has really made a huge difference in improving overall module quality. Are there still modules that fail? Yes. There will always be, but I can say that