Leader of Birmingham.pm [pm.org] and a CPAN author [cpan.org]. Co-organised YAPC::Europe in 2006 and the 2009 QA Hackathon, responsible for the YAPC Conference Surveys [yapc-surveys.org] and the QA Hackathon [qa-hackathon.org] websites. Also the current caretaker for the CPAN Testers websites and data stores.
If you really want to find out more, buy me a Guinness
There is a known problem with CPANPLUS and PREREQ_PM. Unfortunately it is a difficult bug to track down: when you spot it, reproducing it can prove elusive. It is not specific to any OS either, as testers on several platforms have experienced the problem. I started to look at the code earlier this year, but after nearly 4 weeks drew a blank. The internals of CPANPLUS are complex, and following the flow of execution well enough to build an isolated test case is difficult. Just around the time I stopped, Jos announced he was rewriting CPANPLUS in an effort to remove some of these bugs that had crept in, and generally to improve the code base. The development team have been hard at work and put in a lot of effort.
So when you receive a mail that appears to personally attack a single tester over the above bug, it really is unwarranted. When you reply to the mail, explaining the situation and the efforts that have gone (and are going) into trying to remove the bug, it's rather galling to have it fall on deaf ears. The same rant once or twice is sufficient; regurgitating the same misleading rant at every opportunity like some broken auto-responder is really unhelpful, particularly when the content is inaccurate at best and insulting to all cpan-testers at worst.
In my recent talk at LPW I went through some of the problems that have been surfacing, and for a few I've been looking at ways to automatically verify the results. I have been looking at a different approach to testers.db, extending the data to include the perl version and OS version, so that a Win98 Perl 5.6.1 test is distinct from a WinXP Perl 5.8.4 test; currently they aren't distinguished. There are a few other things I was looking at too, in an effort to make the test database more responsive to bogus reports. There's a lot of work involved, especially as it means integrating many of the changes into CPANPLUS and CPAN::WWW::Testers, and gaining the acceptance of the cpan-testers community that it is a good idea.
I started as a cpan-tester because the few Windows testers at the time appeared only to submit reports when they were installing modules by hand. I thought it was worth trying to test every single module as it was uploaded. Unfortunately, for various reasons, I can't test every single module, mainly due to external libraries not being installed, but in the main I've managed to test a sizeable selection. Because I don't have direct SMTP access within our network, I patched Test::Reporter with my own module, Mail::File, which simply creates a single mail file on disk for each report. There are other testers who do something similar. I could then monitor the reports and check for any bogus ones before they went anywhere. Of every 100 reports, probably 2-5 get deleted, and another 5 or so get created by hand for distributions that fail at the 'perl Makefile.PL' or 'make' stage, which the automated run doesn't pick up.
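For the curious, the idea is simple enough to sketch in a few lines of Perl. This is only an illustration of the approach described above, not the real Mail::File module or the actual Test::Reporter patch; the file naming scheme and mail layout here are entirely hypothetical.

```perl
#!/usr/bin/perl
# Sketch only: write each report as a mail-formatted file on disk,
# instead of sending it over SMTP, so it can be reviewed before sending.
use strict;
use warnings;
use POSIX qw(strftime);

sub write_report_file {
    my ($dir, $to, $subject, $body) = @_;

    # Timestamp plus PID gives a unique-enough name for this sketch.
    my $stamp = strftime('%Y%m%d%H%M%S', localtime) . ".$$";
    my $file  = "$dir/report-$stamp.txt";

    open my $fh, '>', $file or die "Cannot write $file: $!";
    print $fh "To: $to\n";
    print $fh "Subject: $subject\n\n";
    print $fh "$body\n";
    close $fh;

    return $file;    # caller can inspect or delete before delivery
}
```

A separate process (or a human) can then review the files in the spool directory, delete any bogus ones, and hand the rest to a real mailer.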
I've even made the effort to post PASS reports with additional information regarding warning messages that are produced during testing. Most of the feedback I've sent has been considered constructive and proved helpful to the authors, and ultimately has helped to craft CPAN into a better repository. There is still a lot of work to be done, as the Phalanx and CPANTS projects indicate, but cpan-testers is also a key to that improvement.
One poster claims that having a FAIL report on cpan-testers implies the module is buggy. I don't believe this to be true. There are all sorts of reasons why a module can produce a FAIL report yet still be legitimately installed on that platform. DateTime is a case in point: the test failure on Windows arises because Windows can be a PITA, yet the module will still work on Windows if you force the install. cpan-testers reports are there for two reasons: firstly to help authors craft better distributions, and secondly to indicate to potential users whether they *might* have problems using the module(s). Seeing a FAIL report and not investigating further is to belittle the efforts of the author, CPAN and the community. The cpan-testers reports are a guide, NOT a definitive statement written in stone.
What galls me most is that the auto-responder is supposedly full of Perl talent, yet trivialises, belittles or ignores the efforts that have gone into improving cpan-testing. If the bug, or any other problem, irritates them so much, perhaps they could offer their talents more productively and actually help fix it. I'm sure with their immense talents they'd find the bug in minutes and have a patch out within the hour.
cpan-testing has been a pretty thankless task. When everything runs smoothly, few seem to care; when a bug raises its head, a minority try to whip up a witch-hunt in an effort to ridicule and belittle the efforts of many. Of the top 15 testers in the last 12 months, only 3 are not CPAN authors. Of the other 12, several have some very worthwhile distributions and are by no means illiterate, ignorant or a waste of time. I can only assume that some feel spurned by the fact their beloved CPAN.pm is being usurped by CPANPLUS.pm.
Whatever their reasons, I've had enough of the insults. At the current time, I don't know whether my work so far to improve CPANPLUS and cpan-testing will be finished, handed over or left to gather dust. We'll see after the week off next week. I'm sure a few will be thinking good riddance, but I find it sad that some consider the efforts of cpan-testers so worthless.
So long, from a pond-life ex-cpan-tester.
PS: If you think I'm implicating you, rather than attempt to rile me with more tirades, please follow the advice of two other posters and join the cpanplus-devel mailing list. You might find it a bit more productive.