
Journal of Alias (5735)

Monday July 13, 2009
11:34 PM

The email rules I've decided to go with for the Top 100

[ #39285 ]

Thanks for everybody's input on my previous post.

In addition to the comments, I received another very interesting data point in an email exchange with Tom Hughes, the IO::Zlib maintainer (currently holding the #1 spot on the FAIL 100 list).

I wrote:

> It's showing up on my graph-weighted FAIL tracker as the number one
> source of problems at the moment.

Tom's reply:

> Well if nobody tells me about these things I can't possibly do anything about them...

This email response is great, because it demonstrates an important factor in maintainership.

The people that don't talk to you are just as important as those that do.

It's very common in Open Source technical flame wars to see comments like the following.

"How can we possibly be expected to support people that never talk to us"

And it's true: you certainly can't provide a truly personal level of support to those people, or fix the bugs that are specific to them.

But when it comes to specific design decisions where you need to choose one way or the other, it's still extremely important to weigh the benefits and costs to your entire user base equally, especially the people who are too busy, too low-skilled or just too shy to speak for themselves.

The people who do respond will skew towards those with the highest level of interest. Decisions about the level of control and freedom you give those people need to be treated orthogonally to decisions about the DEFAULT behaviours that will be forced on everyone who lacks the time or knowledge to contribute.

So, in light of the mixed responses in the comments and the note from Tom, I'm planning to go with the following email rules (there's a rough sketch of the decision logic after the list).

1. Only the owners of the top 10 FAIL modules will ever be emailed.

2. Initially, I'll be emailing weekly. That might be too often, we'll see.

3. I shall attempt to track your position in the Top 10 and ONLY send a new mail if your position on the list increases. I'm not sure how useful this will be in practice, but we'll see how it goes.

4. I'll look into sucking in the email preferences from CPAN Testers, so if you have set emails to be ignored there, you'll also be ignored here.

5. If you make it all the way to #1, I'm going to ignore your preferences and email you anyway (once of course, since you can't get a position any higher).
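For the curious, here's a minimal sketch of that decision logic. The function and parameter names are hypothetical and the real code may well end up different; rule 2 (the weekly cadence) belongs to whatever outer loop calls it.

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Hypothetical helper: should this author get a mail in this week's run?
  # $position is the module's current FAIL 100 rank, $last_position is the
  # rank from the previous run (undef if it wasn't ranked then), and
  # $opted_out is the author's CPAN Testers email preference.
  sub should_email {
      my ( $position, $last_position, $opted_out ) = @_;

      # Rule 1: only the Top 10 are ever considered.
      return 0 if $position > 10;

      # Rule 3: only mail when the position has risen since the last run
      # (a lower number is a higher rank). This also means #1 is mailed
      # only once, since there's no higher position left to climb to.
      return 0 if defined $last_position and $position >= $last_position;

      # Rule 4: respect the CPAN Testers email preferences...
      # Rule 5: ...unless the module has just hit the #1 spot.
      return 0 if $opted_out and $position != 1;

      return 1;
  }

  # Example: a module that just climbed from #3 to #1, whose author has
  # opted out of CPAN Testers mail, still gets that one email.
  print should_email( 1, 3, 1 ) ? "email\n" : "skip\n";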

I'll kick this off in a few weeks, and we'll see how it goes.

  • Well if nobody tells me about these things I can't possibly do anything about them...

    This seems like a side issue to the Fail 100 notification. CPAN testers already provide many ways to track distribution test failures, including e-mail digests and RSS feeds.

    In the IO::Zlib case, there are:

    It seems that more CPAN authors should be mad

    • Despite all this information, the author still felt like he wasn't told.

      To avoid being horribly spammy, none of our current CPAN-wide mechanisms really take the initiative to reach out to the author.

      And if they do, they don't really convey the level of urgency of the problem.

      What I'm hoping we can do in this case is to provide an extremely low-volume mechanism that you, as a normal CPAN author, will never see. For example, POE has never appeared on the FAIL 100 list since I started tracking it.

      • For example, we could get commitments from the CPAN Testers to prioritise testing around the module in the #1 position.

        Do be careful to avoid self-reinforcing bias when you do this.

        • I actually meant more in terms of having the CPAN Testers looking at new releases faster, or having them commit to providing better levels of direct access to their hosts.

          The self-reinforcing bias is interesting though, because in a sense it may be a positive thing.

          If anything that makes it to the #1 position is then subjected to even more intense examination that boosts its score higher, that not only provides more data, but also helps expose more edge cases in what might be a quite edgy module anyway.

          So once yo

          • I was too terse, sorry.

            What I meant by self-reinforcement referred to the ranking, not the extra scrutiny. That is, the extra scrutiny from being at #1 is good – but be careful about whether/how you weigh that extra scrutiny in the next ranking recalculation. Otherwise, just making #1 may increase the chances of staying there for no other reason than the extra scrutiny (which is only due to making #1) causing extra FAILs that would not have turned up otherwise – fortifying the position against m

            • I see.

              In this case that isn't a problem, because of the way that the ranking is calculated.

              Only the most recent production release is counted, so as soon as #1 does a production release they get their score reset.

              Also, I've noticed that the rankings follow a power law anyway, so even if it does undergo some extra scrutiny that's ok, because it took a hell of a lot for it to get that high in the first place.
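              To make the reset concrete, here's a rough sketch (not the real graph-weighted scoring, and the data structure is made up) of counting FAILs against only the newest production release:

                use strict;
                use warnings;

                # Hypothetical shape: releases listed newest-first, each with
                # a version string and a list of CPAN Testers report grades.
                sub fail_count {
                    my ($dist) = @_;

                    # Skip developer releases (versions with an underscore)
                    # and take the newest production release only.
                    my ($latest) = grep { $_->{version} !~ /_/ } @{ $dist->{releases} };
                    return 0 unless $latest;

                    # Count the FAIL grades against that one release only, so
                    # a fresh production release starts the count from zero.
                    return scalar grep { $_ eq 'FAIL' } @{ $latest->{grades} };
                }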