
barbie (2653)
http://barbie.missbarbell.co.uk/

Leader of Birmingham.pm [pm.org] and a CPAN author [cpan.org]. Co-organised YAPC::Europe in 2006 and the 2009 QA Hackathon, responsible for the YAPC Conference Surveys [yapc-surveys.org] and the QA Hackathon [qa-hackathon.org] websites. Also the current caretaker for the CPAN Testers websites and data stores.

If you really want to find out more, buy me a Guinness ;)

Links:
Memoirs of a Roadie [missbarbell.co.uk]
Birmingham.pm [pm.org]
CPAN Testers Reports [cpantesters.org]
YAPC Conference Surveys [yapc-surveys.org]
QA Hackathon [qa-hackathon.org]

Journal of barbie (2653)

Monday January 19, 2004
07:53 AM

Thoughts for CPAN Testing

[ #16905 ]
In my CPAN Testing talk, I mentioned some thoughts I had for improving CPAN testing. The first two are already part of the test reports, but the third requires a biggish change.

[More METADATA]

When reports are submitted, aside from the status, distribution and platform for the test, the Perl and Operating System versions are also included. However, having submitted many Win32 reports over the last few months, it's not obvious from those reports that I'm testing on Windows 2000 Professional with ActivePerl 5.6.1. Does this mean the same test on Windows 98 with ActivePerl 5.8.2 will have the same result? In most instances the answer is probably yes, but there are a number of key differences between the two OSs and Perls. The same is true of other OSs too. Aside from just checking the platform tests, it would be nice, firstly, to see the OS and Perl version appear more prominently on the distribution test pages (HTML/YAML) and in the database, and secondly, for CPANPLUS to be a bit more thorough when verifying whether the distribution has been tested on the current setup (it currently just checks the platform and the number of FAILs and UNKNOWNs).
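
For illustration, here's a minimal sketch (not the actual CPAN Testers code) of pulling the Perl and OS versions out of a report body, assuming the report contains the usual perl -V style summary; the sample under __DATA__ is trimmed and hypothetical. Once extracted, those fields could be stored in the database and shown next to the platform on the distribution pages:

#!/usr/bin/perl
use strict;
use warnings;

# Extract Perl and OS versions from a plain-text test report so they could
# be stored alongside status, distribution and platform. The patterns assume
# the report includes a perl -V style configuration summary.
my $report = do { local $/; <DATA> };

my ($perl_version) = $report =~ /^Summary of my perl5 \(([^)]+)\) configuration/m;
my ($osname)       = $report =~ /osname=([^\s,]+)/;
my ($osvers)       = $report =~ /osvers=([^\s,]+)/;

printf "Perl: %s  OS: %s %s\n",
    $perl_version || 'unknown',
    $osname       || 'unknown',
    $osvers       || 'unknown';

__DATA__
Summary of my perl5 (revision 5.0 version 6 subversion 1) configuration:
  Platform:
    osname=MSWin32, osvers=5.0, archname=MSWin32-x86-multi-thread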

[WITH WARNINGS]

One thing I have been doing manually for quite some time is submitting reports to the testers list and authors when distributions PASS but produce several warnings. So far I haven't received any negative feedback for doing this, but I have had a couple of emails thanking me for highlighting potential problems. I've always tried to fix warnings in my code (I was quite a dab hand with lint when I was a C programmer), and would like others to let me know of warnings should they arise when testing my code. However, to do this with CPAN Testing would require quite a bit of change. I had a brief look at how it all works in CPANPLUS and it's quite complex. I haven't got it working yet, so I still do it by hand, but I hope to have something figured out eventually. Unless of course the CPANPLUS team think it's a good idea and implement it themselves.
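
As a rough sketch of how an automated check might work (the make test invocation and the warning heuristic below are my assumptions, not CPANPLUS internals), the idea is simply to capture STDERR alongside STDOUT and flag a PASS that still produced warnings:

#!/usr/bin/perl
use strict;
use warnings;

# Run the distribution's test suite, capturing STDERR with STDOUT, then look
# for typical Perl warning text. A grade of PASS with warnings would then be
# worth a separate note to the author.
my $output = `make test 2>&1`;
my $passed = ($? == 0);

my @warnings = grep { /warning|used only once|uninitialized value/i }
               split /\n/, $output;

if ($passed && @warnings) {
    print "PASS, but with warnings worth reporting:\n";
    print "  $_\n" for @warnings;
}
elsif ($passed) {
    print "PASS, no warnings seen.\n";
}
else {
    print "FAIL.\n";
}

A real implementation would hook into the test harness rather than scraping its output, but the principle is the same: a PASS that spews warnings is still worth a note to the author.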

Does anyone have thoughts about this? Are they good ideas, worth doing, worth prompting Leon, Autrijus and the CPANPLUS team to take a look?

  • Metadata (Score:3, Insightful)

    by acme (189) on 2004.01.19 8:23 (#27527) Homepage Journal
    More metadata in the testers database (and thus on testers.cpan.org) has always been the plan; I just haven't got around to it. It should be fairly simple to parse the reports and extract all sorts of useful information. However, 5.005_04 takes priority this month...
    • Will try to look into it further when I have more time (and submit the odd patch or two to help out). The testers database side of things is the relatively easy bit. It's the CPANPLUS bit that got me bogged down in trying to figure out where everything gets called and parsed.
  • Muttering about warnings would be useful. I'd certainly appreciate any such reports. They probably shouldn't go into the public test results though, as sometimes the programmer *wants* to spew warnings - e.g. about using a deprecated interface - and will have documented that, or because the user has called a method which is just a stub for code to be added later or is intended to be overridden in a subclass or whatever.
    • Some authors are aware of potential warnings and print messages as to why they may occur. However, if that's documented then at least if a Warning Report is generated, the potential user can make a judgement as to whether the warnings are relevant. However, I would think any user that is potentially thinking of using a module for production code, may be concerned if warnings exist. In most cases warnings are due to (slightly) broken tests, but there are some that highlight problems on specific platforms (ty
  • I couldn't agree more with your comments. I've had a few test failures recently, but alas the test failure report doesn't tell me enough to have any idea why the module failed on their system when it passes all tests on mine!

    After some digging I think I've figured out my most recent failure: the test report only told me it passed on 5.6.x systems and failed on 5.8.x systems. It actually turned out to be an artifact caused by the change in hash ordering between 5.6.x and 5.8.x in the test suite (see the sketch after the comments). It isn't anything t

    --
    -- "It's not magic, it's work..."