  • Well, if nobody tells me about these things, I can't possibly do anything about them...

    This seems like a side issue to the FAIL 100 notification. CPAN Testers already provide many ways to track distribution test failures, including e-mail digests and RSS feeds.

    In the IO::Zlib case, there are several of these already.

    It seems that more CPAN authors should be made aware of them.
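
    For instance, a small polling script along these lines could watch a distribution's feed. This is only a sketch: the feed URL and the report-title format here are assumptions, so check cpantesters.org for the real details.

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use XML::RSS;

    # Hypothetical feed URL -- the real path on cpantesters.org may differ.
    my $dist = 'IO-Zlib';
    my $url  = "http://www.cpantesters.org/distro/I/$dist.rss";

    my $xml = get($url) or die "Could not fetch $url\n";
    my $rss = XML::RSS->new;
    $rss->parse($xml);

    # Report titles are assumed to start with the grade (FAIL, PASS, ...).
    for my $item (@{ $rss->{items} }) {
        print "$item->{title}\n" if $item->{title} =~ /^FAIL/;
    }
    ```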

    • Despite all this information, the author still felt like he wasn't told.

      To avoid being horribly spammy, none of our current CPAN-wide mechanisms really take the initiative to reach out to the author.

      And if they do, they don't really convey the level of urgency of the problem.

      What I'm hoping we can do in this case is to provide an extremely low-volume mechanism that you, as a normal CPAN author, will never see. For example, POE has never appeared on the FAIL 100 list since I started tracking it.

      • For example, we could get commitments from the CPAN Testers to prioritise testing around this #1 position module.

        Do be careful to avoid self-reinforcing bias when you do this.

        • I actually meant more in terms of having the CPAN Testers look at new releases faster, or having them commit to providing better levels of direct access to their hosts.

          The self-reinforcing bias is interesting though, because in a sense it may be a positive thing.

          If anything making it to the #1 position is then subjected to even more intense examination that boosts its score higher, this not only provides more data, but also helps expose more edge cases in what might be a quite edgy module anyway.

          So once you clean up the module for the next release, you stand a better chance of not reappearing on the list in the future, compared to a situation in which you fix one bug and release just to reset the counter on CPAN Testers, only to slowly drift to the top of the list again.

          Remember, the goal of the FAIL 100 list is not to judge modules as being inherently good or bad; it's to identify the places where we get the maximum benefit for our maintenance time.

          Now if this were a judgement call, a way of placing inherent value on the modules (such as the Kwalitee metrics), then I think this bias would be a bigger risk.

          • I was too terse, sorry.

            What I meant by self-reinforcement referred to the ranking, not the extra scrutiny. That is, the extra scrutiny from being at #1 is good – but be careful about whether and how you weigh that extra scrutiny in the next ranking recalculation. Otherwise, just making #1 may increase the chances of staying there for no other reason than the extra scrutiny (which is only due to making #1) causing extra FAILs that would not have turned up otherwise – fortifying the position against movement.
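
            A toy numeric example of the concern (the numbers are invented): if the #1 spot draws twice the test volume, raw FAIL counts can keep it at #1 even when its failure *rate* is no worse than its neighbour's.

            ```perl
            use strict;
            use warnings;

            # Invented numbers: same underlying failure rate, different test volume.
            my %dists = (
                'Top-Dist'   => { reports => 200, fail_rate => 0.10 },  # at #1, tested twice as much
                'Other-Dist' => { reports => 100, fail_rate => 0.10 },
            );

            for my $name (sort keys %dists) {
                my $d     = $dists{$name};
                my $fails = $d->{reports} * $d->{fail_rate};
                printf "%-10s  %3d FAILs (rate %.0f%%)\n",
                    $name, $fails, 100 * $d->{fail_rate};
            }
            # Top-Dist shows 20 FAILs to Other-Dist's 10, purely from extra scrutiny.
            ```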

            • I see.

              In this case that isn't a problem, because of the way that the ranking is calculated.

              Only the most recent production release is counted, so as soon as #1 does a production release they get their score reset.

              Also, I've noticed that the rankings follow a power law anyway, so even if it does undergo some extra scrutiny that's ok, because it took a hell of a lot for it to get that high in the first place.
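
              As an illustration only (the data is made up, and the real FAIL 100 scoring may well differ), here is a sketch of how "only the most recent production release counts" gives you that reset:

              ```perl
              use strict;
              use warnings;

              # Hypothetical report records -- real data would come from CPAN Testers.
              my @reports = (
                  { dist => 'IO-Zlib', version => '1.09',  grade => 'FAIL' },
                  { dist => 'IO-Zlib', version => '1.09',  grade => 'PASS' },
                  { dist => 'IO-Zlib', version => '1.10',  grade => 'FAIL' },
                  { dist => 'POE',     version => '1.003', grade => 'PASS' },
              );

              # Find the latest production release per distribution. Developer
              # releases (versions with an underscore) are skipped, and the
              # numeric comparison is a simplification of real version semantics.
              my %latest;
              for my $r (@reports) {
                  next if $r->{version} =~ /_/;
                  my $d = $r->{dist};
                  $latest{$d} = $r->{version}
                      if !defined $latest{$d} or $r->{version} > $latest{$d};
              }

              # Score = FAILs against the latest production release only, so a
              # new production release resets a distribution's score to zero.
              my %score;
              for my $r (@reports) {
                  $score{ $r->{dist} }++
                      if $r->{grade} eq 'FAIL'
                      and $r->{version} == $latest{ $r->{dist} };
              }

              # Rank: highest FAIL count first.
              for my $dist (sort { $score{$b} <=> $score{$a} } keys %score) {
                  printf "%3d FAILs  %s\n", $score{$dist}, $dist;
              }
              ```

              Here releasing IO-Zlib 1.10 wipes out the FAIL recorded against 1.09; only the single FAIL against 1.10 still counts.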