jesse's Friends' Journals. use Perl; is Copyright 1998-2006, Chris Nandor. Stories, comments, journals, and other submissions posted on use Perl; are Copyright their respective owners. Flore Louise Apolline Bruhat-Souche <p>On Thursday, August 19, 2010 at 9:30, Flore Louise Apolline Bruhat-Souche was born. She weighs 3.02 kg and measures 48 cm. </p><p> Word already spread through IRC (#perlfr and #yapc mostly) and via email and telephone. </p><p> The mother is fine, the father is slightly tired and the <a href="">big sister</a> is happy. </p><p> There is <a href="">one photo online</a>. </p> BooK 2010-08-20T22:17:07+00:00 journal Announcing CPAN Testers 2.0 <p>After 6 months of development work, following 2 years' worth of design and preparation, CPAN Testers 2.0 is finally live.</p><p>With the rapid growth in CPAN Testers environments and testers over the past few years, the previous method of posting reports to a mailing list had reached the point where it could no longer scale. This was recognised several years ago, and discussions for a new system had already begun, with the view that reports should be submitted via HTTP.</p><p>At the Oslo QA Hackathon in 2008, David Golden and Ricardo Signes devised the Metabase, with the design work continuing at the Birmingham QA Hackathon in 2009, where David and Ricardo were able to bring others into the thought process to work through potential issues and begin initial coding. A number of releases to CPAN and Github followed, with more people taking an interest in the project.</p><p>The Metabase itself is a database framework and web API to store and search opinions from anyone about anything. In the terminology of Metabase, Users store Facts about Resources. In the Metabase world, each CPAN tester is a User. The Resource is a CPAN distribution. The Fact is the test report. 
Today that&#8217;s just the text of the email message, but in the future it will be structured data. The Metabase specifies data storage capabilities, but the actual database storage is pluggable, from flat files to relational databases to cloud services, which gives CPAN Testers more flexibility to evolve or scale over time.</p><p>Meanwhile the CPAN Testers community was also attracting more and more interest from people wanting to be testers themselves. As a consequence the volume of reports submitted increased each month, to the point that the mail server was struggling to deal with all the mailing lists it hosted. The cpan-testers mailing list was submitting more posts in one day than any other list submitted in a month (in a year in some cases). Robert and Ask, very reasonably, asked if the testers could throttle their submissions down to 5k report posts a day, and set a deadline of 1st March 2010 to switch off the mailing list.</p><p>David Golden quickly took on the task of putting together a project plan, and work began in earnest in December 2009. With less than 3 months to the cut-off date, there was a lot of work to do. David concentrated on the Metabase, with Barbie working on ensuring that the current cpanstats database and related websites could move to the Metabase style of reports. Despite a lot of hard work from a lot of people, we unfortunately missed the 1st March deadline. However, with report submissions throttled to a more manageable level and the target for HTTP submissions in sight, though not yet complete, Robert and Ask were very understanding and agreed to keep us going a little while longer.</p><p>Throughout March and April a small group of beta testers were asked to fire their submissions at the new system. This ironed out many wrinkles and resulted in a better understanding of what we wanted to achieve. 
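</p><p>For an individual tester, the switch is mostly a change of transport in the reporting client rather than a new workflow. As a purely illustrative sketch (the exact option names and the Metabase URI should be taken from the CPAN::Reporter and Test::Reporter documentation, not from here), a tester's configuration might point at the Metabase along these lines:</p>

```ini
; ~/.cpanreporter/config.ini -- illustrative sketch, not authoritative
email_from = smoker@example.com
; send reports over HTTP to the Metabase instead of by mail;
; metabase_id.json holds the tester's registered credentials
transport = Metabase uri https://metabase.cpantesters.org/api/v1/ id_file metabase_id.json
```

<p>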
The first attempts at retrieving the reports from the Metabase into the cpanstats database began in April, and again highlighted further wrinkles that needed to be addressed. After a month of hard testing and refinement, we finally had working code that went from report submission by a tester, storage into the Metabase, retrieval into the cpanstats database and finally presentation on the CPAN Testers family of websites.</p><p>During June the process was silently switched from testing to live, allowing reports to be fed through into the live websites. Due to the ease with which the new style reporting fit into the existing system, the switch largely went unnoticed by the CPAN testers community as well as the Perl community. A considerable success.</p><p>The CPAN Testers eco-system is now considerably larger than those early days of simply submitting handwritten reports by email to a mailing list, and the work to get here has featured a cast of thousands. Specifically for CPAN Testers 2.0, the following people have contributed code, ideas and effort to the project over the past six months:</p><ul> <li>Andreas K&ouml;nig</li><li>Apocalypse</li><li>Ask Bj&oslash;rn Hansen</li><li>Barbie</li><li>Chris Williams</li><li>Dan Collins</li><li>David Cantrell</li><li>David Golden</li><li>Florian Ragwitz</li><li>H.Merijn Brand</li><li>Jon Allen</li><li>Lars D&#618;&#7431;&#7428;&#7435;&#7439;&#7457; &#36842;&#25289;&#26031;</li><li>L&eacute;on Brocard</li><li>MW487</li><li>Nigel Horne</li><li>Ricardo Signes</li><li>Richard Dawe</li><li>Robert Spier</li><li>Serguei Trouchelle</li><li>Shlomi Fish</li><li>Slaven Rezi&#263;</li></ul><p>Barbie and David would like to thank everyone for their involvement. Without these guys CPAN Testers 2.0 would not have been possible. 
Thanks to everyone, we can now look forward to another 10 years and more of CPAN Testers.</p><p> <a href="">CPAN Testers</a> now holds over 7.5 million test reports covering nearly 11 years worth of testing Perl distributions. There have been over 1,000 testers in that time, and every single one has helped the CPAN Testers project to be the largest single community supported testing system of any programming language. For a full list of everyone who has contributed, visit the <a href="">CPAN Testers Leaderboard</a>. A huge thank you to everyone.</p><p>With the Metabase now online and live, we can now announce an absolute deadline to close the mailing list. This is currently set as 31st August 2010. After this date all submissions via email will be rejected, and testers will be encouraged to upgrade their testing tools to take advantage of the new HTTP submission system. Many of the high volume testers have already moved to the new system, and we expect nearly everyone else to move in the next month. We will be tailing the SMTP submissions to catch those who haven't switched, such as some of the more infrequent testers, and warn them of the deadline.</p><p>More work is planned for CPAN Testers, from further validation and administration of reports, to providing more functionality for alternative analysis and search capabilities. Please check the <a href="">CPAN Testers Blog</a> for our regular updates.</p><p>If you'd like to become a CPAN Tester, please check the <a href="">CPAN Testers Wiki</a> for details about setting up a smoke testing environment, and join the <a href="">cpan-testers-discuss mailing list</a> where many of the key members of the project can offer help and advice.</p><p>You can find out more about CPAN Testers at two forthcoming conferences. David Golden will be presenting <a href="">"Free QA! 
What FOSS can Learn from CPAN Testers"</a> at OSCON and Barbie will be presenting <a href="">"CPAN Testers 2.0 : I love it when a plan comes together"</a> at YAPC::Europe.</p><p>CPAN Testers is sponsored by Birmingham Perl Mongers, and supported by the Perl community.</p><p>You can now <a href="">download the full and complete Press Release</a> from the CPAN Testers Blog. If you have access to further IT news reporting services, please feel free to submit the Press Release to them. Please let us know if you are successful in getting it published.</p><p>Cross-posted from the <a href="">CPAN Testers Blog</a> </p> barbie 2010-07-05T09:50:22+00:00 journal Technical Meeting - Wednesday 26th May 2010 <code> Event:&nbsp;&nbsp;&nbsp; Technical Meeting<br> Date:&nbsp;&nbsp;&nbsp;&nbsp;Wednesday 26th May 2010<br> Times:&nbsp;&nbsp;&nbsp;from 7pm onwards (see below)<br> Venue:&nbsp;&nbsp;&nbsp;The Victoria, 48 John Bright Street, Birmingham, B1 1BN.<br> Details:&nbsp;<a href=""></a> <br> </code> <p> <b>Talks:</b> </p><ul> <li>Accelerated web development with Catalyst [Richard Wallman]</li><li>CPAN Testers 2.0 - "I love it when a plan comes together" [Barbie]</li></ul><p> <b>Details</b> </p><p>This month we welcome a returning guest speaker, Richard Wallman, who will be taking a look at how Catalyst has eased the development lifecycle of websites, based on his own experiences. In addition I'll be looking at the progress of CPAN Testers 2.0, and at some of the near-future plans for CPAN Testers.</p><p>As per usual, this month's technical meeting will be upstairs at The Victoria. The pub is on the corner of John Bright Street and Beak Street, between the old entrance to the Alexandra Theatre and the backstage entrance. If in doubt, the main entrance to the Theatre is on the inner ring road, near the Pagoda roundabout. The pub is on the road immediately behind the main entrance. 
See the map link on the website if you're stuck.</p><p>As always entry is free, with no knowledge of Perl required. We'd be delighted to have you along, so feel free to invite family, friends and colleagues<nobr> <wbr></nobr>;)</p><p>Some of us should be at the venue from about 7.00pm, usually in the backroom downstairs. Order food as you get there, and we'll aim to begin talks at about 8pm. I expect talks to finish by 9.30pm, with plenty of time for discussion in the bar downstairs.</p><p> <b>Venue &amp; Directions:</b> </p><p> The Victoria, 48 John Bright Street, Birmingham, B1 1BN<br> - <a href=";from=&amp;promotion=">Pub Details</a> <br> - <a href="">Picture</a> <br> - <a href=";client=firefox-a&amp;q=the+victoria+pub&amp;near=Birmingham&amp;radius=0.0&amp;cd=1&amp;cid=52482921,-1893619,7492755984503563963&amp;li=lmd&amp;z=14&amp;t=m">Google Map</a></p><p>The venue is approximately 5-10 minutes' walk from New Street station, and about the same from the city centre. On-street car parking is available; see full details and directions on the <a href="">website</a>.</p><p> <b>Times:</b> </p><p>These are the rough times for the evening:</p><ul> <li>food available until 9.00pm</li><li>talks: 8.00-10.00pm</li><li>pub closes: 11.00pm</li></ul><p>Please note that beer will be consumed during all the above sessions<nobr> <wbr></nobr>;)</p> barbie 2010-05-24T19:26:06+00:00 journal Fixing Mailman with Perl <p>Mailman is useful. Mailman works. Mailman is ubiquitous. I am subscribed to over 50 mailing-lists managed by Mailman.</p><p> But Mailman is software, and therefore <a href="">hateful</a>. </p><p>My particular Mailman hate is the <code>nodupes</code> parameter.</p><blockquote><div><p> <i> <b>Avoid duplicate copies of messages?</b> </i> </p><p> <i>When you are listed explicitly in the To: or Cc: headers of a list message, you can opt to not receive another copy from the mailing list. 
Select Yes to avoid receiving copies from the mailing list; select No to receive copies.</i> </p><p> <i>If the list has member personalized messages enabled, and you elect to receive copies, every copy will have a X-Mailman-Copy: yes header added to it.</i> </p></div> </blockquote><p>I like duplicate email. Moreover, I like the <code>List-Id</code> header that makes emails sent through a list <i>special</i> (at least in the sense that they can be filtered <i>automatically</i> by more tools, and I can just delete the stuff that piles up in my Inbox). And by the way, how could Mailman be really sure that I got that other copy? Just because the headers say so? Bah.</p><p>Oh, and I also hate the fact that <i>Set globally</i> never worked for me with this option.</p><p>So, because I'm lazy, and I don't want to go clikety-click to first, get a reminder of the random password that was assigned to me years ago, and second, log in and change that annoying option, and because <b>I don't want to do that fifty times, over and over again</b>...</p><p> I wrote and put on CPAN <a href="">WWW::Mailman</a>, designed to automate that kind of tedious task out of my life (and hopefully yours). Examples included, I know you're lazy too. </p><p><small>PS: I've been told there <i>is</i> a command-line interface to Mailman, but it is reserved for people managing Mailman on the server.</small></p> BooK 2010-03-25T01:07:02+00:00 journal CPAN Testers Summary - December 2009 - The Wall <p>Cross-posted from the <a href="">CPAN Testers Blog</a>.</p><p>Last month CPAN Testers was finally <a href="">given a deadline</a> to complete the move away from SMTP to HTTP submissions for reports. Or perhaps more accurately, to move away from the servers, as the volume of report submissions has been affecting support of other services to the Perl eco-system. The deadline is <b>1st March 2010</b>, which leaves just under 2 months for us to move to the CPAN Testers 2.0 infrastructure. 
Not very long.</p><p> <b>David Golden</b> has now put together a <a href="">plan of action</a>, which is being rapidly consumed and worked on. The first fruit of this has been an update to the <a href="">CPAN Testers Reports</a> site. The ID previously visible on the site, referring to a specific report, is now being hidden away. The reason for this is that the current ID refers to the NNTP ID that is used on the NNTP archive for the <i>cpan-testers</i> mailing list. This ID is specific to the SMTP submissions and includes many posts which are not valid reports. As such we will be moving to a GUID as supplied by the Metabase framework, with existing valid SMTP-submitted reports being imported into the Metabase. The NNTP ID will eventually be completely replaced by the Metabase GUID across all parts of the CPAN Testers eco-system, including all the databases and websites. As such you will start to see a transition over the next few weeks.</p><p>The second change, which has now been implemented, is to present the reports via the <a href="">CPAN Testers Report</a> site and not the NNTP archive on the servers. Currently the presentation of a report (e.g. <a href="">this report for App-Maisha</a>) is accessed via the reports pages for a distribution or an author, but will also be accessible in a similar manner across all the CPAN Testers websites. There is a large batch of early reports that are currently missing from the database, but these are being updated now, and will hopefully be complete within the next few days. If you have any issues with the way the reports are presented, including any broken or missing links from other parts of the site, please let me know.</p><p>In all this change, there is one aspect that may worry a few people, and that is the <i>"Find A Tester"</i> application. For the next few months it will still exist, but the plan is to make the Reports site more able to provide tester contact information. 
In addition to this the testers themselves will soon have the ability to update their own profiles. Initially this will be used to link email addresses to reports and then map those email addresses to a profile held within the Metabase, but in the longer term it will be used to help us manage the report submissions better.</p><p>David Golden is concentrating on the Client and Metabase parts of the action plan, and I am working on porting the websites and 'cpanstats' database. If you have any free time and would like to help out, please review the <a href="">action plan</a>, join the <a href=""> <i>cpan-testers-discuss</i> mailing list</a>, and let us know where you'd like to help. There is a lot of work to be done, and the more people involved, the better the spread of knowledge in the longer term.</p><p>After David announced the <a href="">deadline last month</a>, all the testers have throttled back their smoke bots. This saw a dramatic reduction in the number of reports and pages being processed, and enabled the Reports Page Builder to catch up with itself, to the point that it frequently had fewer than 1,000 requests waiting. That changed yesterday with the changes to the website, as every page now needs to be updated. It typically takes about 5 days to build the complete site, so this quiet period will allow the Builder to rebuild the site without adversely affecting the current level of report submissions. Expect the site to reach a more manageable level of processing some time next week. 
To help monitor the progress of the builder, a new part of the Reports site, <a href="">The Status Page</a>, now checks the status of all outstanding requests every 15 minutes, providing a 24-hour perspective and a week-long perspective.</p><p>A new addition to the family was also launched recently, the <a href="">CPAN Testers Analysis</a> site, which <b>Andreas K&#246;nig</b> has been working on, to help authors identify failure trends from reports for their distributions. Read more on <a href="">Andreas' blog</a>.</p><p>Last month we had a total of 168 tester addresses submitting reports. The mappings this month included 22 total addresses mapped, of which 2 were for newly identified testers. Another low mapping month, due to work being done on CPAN Testers as a whole.</p><p>My thanks this month go to <b>David Golden</b> for finding the time to write an action plan, and <b>his wife</b> for allowing him the time to write it, as well as working on all the other areas involving the CPAN Testers and the Metabase<nobr> <wbr></nobr>:)</p> barbie 2010-01-07T13:54:38+00:00 journal Pink stinks <p>I still like wearing pink T-shirts, though. And not just at Perl conferences. (My love of pink basically comes from pushing a "joke" from the YAPC Europe 2002 auction way beyond its scope.)</p><p>Some very interesting reads:</p><ul> <li> <a href=""></a> </li><li> <a href=""></a> (I bought the T-shirt and buttons)</li><li> <a href=""></a> (the blog)</li></ul><p>A little more than a week ago, it was my daughter's second Christmas... I guess it shows. Raising children brings all kinds of new interesting issues and questions to one's attention.</p> BooK 2010-01-05T00:35:05+00:00 journal git-move-dates <p>As a Perl programmer and Open Source enthusiast, you probably sometimes contribute to Open Source projects. Maybe even (gasp!) during work hours. 
If your employer is jealous of your time, you probably do not want your commits to <i>look</i> like they were done during work hours (especially in tools like the GitHub Punch Card).</p><p>On the other hand, it doesn't make sense to not commit your changes, and lose the benefits of using Git, just so that the reality of <i>when</i> you worked on these tiny changes is not made public. (At that point, it would probably also make more sense to have an open discussion with your boss...)</p><p>The way Git handles history makes it really easy to change the date of commits on a local branch. When I first thought about it, my idea was to write some date manipulation code (move a bunch of commits from a time range to another with all kinds of fancy nooks and crannies) and manipulate the Git trees and commits myself.</p><p>Then I discovered <b>git filter-branch</b>, which is all about rewriting commits. And I realized that in the situation above, moving commits a few hours in the future (like ten minutes before actually using <b>git push</b> or <b>git send-email</b>) is largely sufficient.</p><p>The problem is that the code to move a bunch of commits one hour in the future looks like this:</p><blockquote><div><p> <tt>&nbsp; &nbsp; git filter-branch --commit-filter '\<br>&nbsp; &nbsp; &nbsp; GIT_AUTHOR_DATE=`echo "$GIT_AUTHOR_DATE"|perl -pe'\''s/\d+/$&amp;+3600/e'\''`;\<br>&nbsp; &nbsp; &nbsp; GIT_COMMITTER_DATE=`echo "$GIT_COMMITTER_DATE"|perl -pe'\''s/\d+/$&amp;+3600/e'\''`;\<br>&nbsp; &nbsp; &nbsp; git commit-tree "$@"' -- &lt;rev-list&gt;</tt></p></div> </blockquote><p>Which is impossible to remember, and painful to write.</p><p>So, lazy as a Perl programmer should be, I just wrote <b>git-move-dates</b>, that writes and runs the above type of command-lines for me. 
Useful options include <i>--committer</i> and <i>--author</i> (to change only one of the two existing dates), and options ranging from <i>--seconds</i> to <i>--weeks</i> to define the exact timespan of your commits' time-travels.</p><p>As with my other Git gadgets, the source is available from <a href=""></a>.</p><p>And remember: there's nothing wrong with rewriting history, as long as it's <i>unpublished</i>, local history.<nobr> <wbr></nobr><code>;-)</code> </p> BooK 2009-12-15T22:22:40+00:00 journal CPAN Testers Summary - November 2009 - Abbey Road <p>Cross-posted from the <a href="">CPAN Testers Blog</a>.</p><p>In November we reached the <a href="">6 million reports</a> submitted mark. It's quite staggering how many reports are being submitted these days. It's now roughly 1 million reports every 3 months! So expect a 10 million reports post some time in August 2010<nobr> <wbr></nobr>:)</p><p>Now that we are producing so many reports, and while there is a desire to get more reports from less-tested operating systems, Tim Bunce recently highlighted his interest in getting reports that include a diverse set of Perl configuration flags, in particular regarding how Perl was compiled (with and without threads, etc). At the moment the CPAN Testers Statistics database doesn't include that information, but the Metabase that is behind CPAN Testers 2.0 will. In addition, the Metabase will be queryable to glean the reports that contain a specific set of flags, etc. At the moment there are quite a few different setups testing on the top few operating systems. While some authors see these as just repeated results, in some cases they provide slight differences in the test results. This is particularly what Tim was interested in for <a href="">Devel-NYTProf</a>. Hopefully we'll soon be closer to making that information more readily available. 
In the meantime, if you do want to get involved with CPAN Testers, and only have a traditional operating system available, take a look at some of the reports posted by current testers for the same platform, and see what different setups you could provide.</p><p>In the CPAN Testers namespace, CPAN has seen a new upload, <a href="">CPAN-Testers-Data-Addresses</a>. This release will be the new way for me to manage the tester address mappings. To begin with the testing is being run stand-alone, but it will shortly be integrated into the <a href="">CPAN Testers Statistics</a> website. From there it will also be integrated into the new site that is hopefully being launched early next year, which will allow testers to register their testing addresses (among other things). More uploads to the CPAN Testers namespace are being worked on, in particular ones to provide more programmatic access to the CPAN Testers APIs. More news on those hopefully next month.</p><p>This weekend sees the annual <a href="">London Perl Workshop</a>. Featured in the schedule is <a href="">Chris 'BinGOs' Williams</a>' talk "<a href="">Rough Guide to CPAN Testing</a>". If you are a CPAN Tester and are planning to attend the event, please come along and say hello<nobr> <wbr></nobr>:)</p><p>Last month we had a total of 164 tester addresses submitting reports. The mappings this month included 17 total addresses mapped, of which 7 were for newly identified testers. A bit of a low mapping month, mostly due to my attention being elsewhere. 
With the new mapping system hopefully this will become a little more streamlined for next year.</p><p>Until next time, happy Christmas testing<nobr> <wbr></nobr>:)</p> barbie 2009-12-04T10:52:07+00:00 journal More Git gadgets <p>Last week on p5p, after noticing that the new way to name version tags in perl5.git was to use <b>v<i>$version</i> </b>, I proposed to copy the old <b>perl-<i>$version</i> </b> tags to the new scheme, going back at least to 5.6.0 (which is when the Perl naming convention for versions changed). I haven't done it yet for various reasons, but in order to do it, I developed a small <b>git-copytag</b> tool, that copies (or moves) the tag with all its annotations. Clone it from <a href=""></a>.</p><p>I also started to check if the existing tags in perl5.git point to the commit matching the published distribution, but my first attempt failed miserably (creating Git <code>tree</code> objects from the distributions and trying to find a commit pointing to the same <code>tree</code> id). I'll move on to a different approach (<code>diff -r</code>, basically).</p><p> In other news, I finally succeeded in creating the <a href="/~BooK/journal/39919">Git fractals</a> I wanted. I'll soon post more details about it on the <a href="">page dedicated to Sierpi&#324;ski triangles</a> on <a href="">my personal site</a>.</p> BooK 2009-12-03T08:55:03+00:00 journal Spiteful spam <p> I know that a lot of people are moving their blogs over to <a href=""></a>, leaving <a href=""></a> behind. Part of the frustration is that Chris Nandor, Pudge, hasn't done much to modernize, but hey, it's Pudge's choice, and he runs the site, and we're all here by grace of him running it. Beggars and choosers, y'know. If you're frustrated with a Perl news site, you can go <a href="">start your own</a>. 
</p><p> So certainly, I think this spam I just received is just out of line.</p><blockquote><div><p> <tt>From: GreatestColonHealth &lt;;<br>Subject: With This Astounding Cleanser You May Eliminate Pudge</tt></p></div> </blockquote><p>That's just nasty!</p> petdance 2009-12-01T16:53:59+00:00 journal Six Million Reports! <a href="">CPAN Testers tops 6 million reports</a> barbie 2009-11-23T13:42:09+00:00 journal git-mtime <p>Ever complained that on a checkout Git did not reset the <code>mtime</code> of your files to the date when they were committed?</p><p> My <a href="">home page</a> is generated with Template Toolkit by a script stored in the <tt>post-receive</tt> hook. On a checkout Git only updates the files that have changed, so normally I can trust <tt>template.modtime</tt> to be correct and use it to show a <i>Last modified</i> date. </p><p>But I'm a perfectionist, and I want to be extra sure. So I created this little utility, that I called <b>git-mtime</b>:</p><blockquote><div><p> <tt>&nbsp; &nbsp; #!/bin/sh<br>&nbsp; &nbsp; git log --name-only --date=iso --reverse --pretty=format:%at "$@" \<br>&nbsp; &nbsp; | perl -00ln -e '($d,@f)=split/\n/;$d{$_}=$d for grep{-e}@f' \<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;-e '}{utime undef,$d{$_},$_ for keys%d'</tt></p></div> </blockquote><p>Note that it passes all parameters to <code>git log</code>, so you can apply it to a subdirectory (using <code>--</code>), or even use the dates from another branch (though I'm not sure what use this can have).</p><p> <small>And for extra bonus points, it uses the secret eskimo greeting operator!</small> </p><p>Now that I have a few gadgets based on Git, I thought I might as well publish them somewhere. A quick look on Github ruled out <b>git-tools</b>, <b>git-utils</b> and <b>git-extras</b> (come on people, most of these things could be done with Git aliases!). <b>git-aid</b> (especially the plural) didn't seem like a good name either. 
So after looking around for synonyms, I settled on <b>git-gadgets</b>.</p><p>Clone it from <a href=""></a>.</p> BooK 2009-11-23T12:41:17+00:00 journal Perl 5.11.2 <div><p>&#160; The streets were pretty quiet, which was nice. They&#39;re always quiet here<br>&#160; at that time: you have to be wearing a black jacket to be out on the<br> &#160; streets between seven and nine in the evening, and not many people in the<br> &#160; area have black jackets. It&#39;s just one of those things. I currently live<br> &#160; in Colour Neighbourhood, which is for people who are heavily into colour.<br> &#160; All the streets and buildings are set for instant colourmatch: as you<br> &#160; walk down the road they change hue to offset whatever you&#39;re wearing.<br> &#160; When the streets are busy it&#39;s kind of intense, and anyone prone to<br> &#160; epileptic seizures isn&#39;t allowed to live in the Neighbourhood, however<br> &#160; much they&#39;re into colour.<br> &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160;- Michael Marshall Smith, &quot;Only Forward&quot; </p><p> It gives me great pleasure to announce the release of Perl 5.11.2. </p><p> This is the third DEVELOPMENT release in the 5.11.x series leading to a stable release of Perl 5.12.0. You can find a list of high-profile changes in this release in the file &quot;perl5112delta.pod&quot; inside the distribution. </p><p> You can download the 5.11.2 release from: </p><p>&#160;&#160; <a href=""></a> </p><p> The release&#39;s SHA1 signatures are: </p><p>&#160;&#160; 2988906609ab7eb00453615e420e47ec410e0077 &#160;perl-5.11.2.tar.gz</p><div><p>&#160;&#160; 0014442fdd0492444e1102e1a80089b6a4649682 &#160;perl-5.11.2.tar.bz2<br> <br> We welcome your feedback on this release. If you discover issues with Perl 5.11.2, please use the &#39;perlbug&#39; tool included in this distribution to report them. 
If Perl 5.11.2 works well for you, please use the &#39;perlthanks&#39; tool included with this distribution to tell the all-volunteer development team how much you appreciate their work.<br> <br> If you write software in Perl, it is particularly important that you test your software against development releases. While we strive to maintain source compatibility with prior stable versions of Perl wherever possible, it is always possible that a well-intentioned change can have unexpected consequences. If you spot a change in a development version which breaks your code, it&#39;s much more likely that we will be able to fix it before the next stable release. If you only test your code against stable releases of Perl, it may not be possible to undo a backwards-incompatible change which breaks your code.<br> <br> Notable changes in this release:</p><ul> <li> It is now possible to overload the C operator</li><li> Extension modules can now cleanly hook into the Perl parser to define new kinds of keyword-headed expression and compound statement</li><li> The lowest layers of the lexer and parts of the pad system now have C APIs available to XS extensions</li><li> Use of C&lt;:=&gt; to mean an empty attribute list is now deprecated</li></ul><p> Perl 5.11.2 represents approximately 3 weeks development since Perl 5.11.1 and contains 29,992 lines of changes across 458 files from 38 authors and committers:<br> <br> Abhijit Menon-Sen, Abigail, Ben Morrow, Bo Borgerson, Brad Gilbert, Bram, Chris Williams, Craig A. Berry, Daniel Frederick Crisman, Dave Rolsky, David E. Wheeler, David Golden, Eric Brine, Father Chrysostomos, Frank Wiegand, Gerard Goossen, Gisle Aas, Graham Barr, Harmen, H.Merijn Brand, Jan Dubois, Jerry D. 
Hedden, Jesse Vincent, Karl Williamson, Kevin Ryde, Leon Brocard, Nicholas Clark, Paul Marquess, Philippe Bruhat, Rafael Garcia-Suarez, Sisyphus, Steffen Mueller, Steve Hay, Steve Peters, Vincent Pit, Yuval Kogman, Yves Orton, and Zefram.<br> <br> Many of the changes included in this version originated in the CPAN modules included in Perl&#39;s core. We&#39;re grateful to the entire CPAN community for helping Perl to flourish.<br> <br> Jesse Vincent or a delegate will release Perl 5.11.3 on December 20, 2009. Ricardo Signes will release Perl 5.11.4 on January 20, 2010. Steve Hay will release Perl 5.11.5 on February 20, 2010.<br> <br> Regards, L&#233;on</p></div></div> acme 2009-11-21T08:56:15+00:00 journal Git fractals <p> Last week (November 11) over dinner in Amsterdam, I talked with a colleague about Git as a tool for creating graphs. For some reason I started to think about a <a href="">Sierpi&#324;ski triangle</a>, and we started trying to create such graphs with Git. </p><p>The basic shape is a triangle (in the UTF-8 drawing below, the arrows represent the parent &#8594; child relationship):</p><p> <code>&nbsp;&nbsp;&nbsp;&nbsp;&#8901;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8595;&nbsp;&#8600;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8901;&nbsp;&#8594;&nbsp;&#8901;</code> </p><p>It is quite easy to create by hand. 
I did it using <code>git commit-tree</code>, always using the same tree object (the empty tree), as we only care about the graph that represents commit lineage, not about the content.</p><p>The next step basically repeats the same shape, attached to the bottom nodes / commits of the previous graph:</p><p> <code>&nbsp;&nbsp;&nbsp;&nbsp;&#8901;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8595;&nbsp;&#8600;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8901;&nbsp;&#8594;&nbsp;&#8901;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8595;&nbsp;&#8600;&nbsp;&#8595;&nbsp;&#8600;&nbsp;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8901;&nbsp;&#8594;&nbsp;&#8901;&nbsp;&#8594;&nbsp;&#8901;</code> </p><p>I could create it in a couple of minutes, with a few more <code>git commit-tree</code> commands.</p><p>After that, it stops being interesting to do by hand, and one wants to program it. My goal has been to create the following shape, and larger ones, using a Perl program.</p><p> <code>&nbsp;&nbsp;&nbsp;A<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8901;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8595;&nbsp;&#8600;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8901;&nbsp;&#8594;&nbsp;&#8901;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8595;&nbsp;&#8600;&nbsp;&#8595;&nbsp;&#8600;&nbsp;<br> &nbsp;&nbsp;B&nbsp;&#8901;&nbsp;&#8594;&nbsp;&#215;&nbsp;&#8594;&nbsp;&#8901;&nbsp;C<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8595;&nbsp;&#8600;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&#8595;&nbsp;&#8600;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8901;&nbsp;&#8594;&nbsp;&#8901;&nbsp;&nbsp;&nbsp;&#8901;&nbsp;&#8594;&nbsp;&#8901;<br> &nbsp;&nbsp;&nbsp;&nbsp;&#8595;&nbsp;&#8600;&nbsp;&#8595;&nbsp;&#8600;&nbsp;&#8595;&nbsp;&#8600;&nbsp;&#8595;&nbsp;&#8600;&nbsp; <br> &nbsp;&nbsp;&nbsp;&nbsp;&#8901;&nbsp;&#8594;&nbsp;&#215;&nbsp;&#8594;&nbsp;&#215;&nbsp;&#8594;&nbsp;&#215;&nbsp;&#8594;&nbsp;&#8901;</code> </p><p>So far, I've tried several recursive approaches, and failed miserably each time. 
The big issue is the merge points, shown in the graph above using &#215;.</p><p>In my recursive approaches, I usually first created the triangle (ABC), and then started again from B and then C. But the last merge of B (the &#215; in the middle of the bottom line) can only be created once the vertical line starting from C has been started. I should probably keep state in some way, but haven't had much time to spend on this.</p><p> I also thought about using the mapping from <a href="'s_triangle">Pascal's triangle</a> (odd numbers as dots and even numbers as empty space, see both Wikipedia pages for details), but haven't actually tried it yet. </p><p> In other news, I've started to take care of my personal web site again. <a href="">It hadn't been touched since 2002</a>, so <a href="">I gave it a facelift</a>. Surprisingly enough, the only section so far is about <a href="">Git fractals</a>.<nobr> <wbr></nobr><code>;-)</code> Links to Git repositories, GraphViz images and more successful attempts with other fractals are also available there. </p> BooK 2009-11-20T00:43:58+00:00 journal Sponsor CPAN Testers (again) <p>Cross-posted from the <a href="">CPAN Testers Blog</a>.</p><p>Back in 2008, it was obvious that the fragmentation of CPAN Testers sites was a problem. The system was slow, usually getting updated just once a day, and the presentation was a little disjointed. At that point a dedicated server was suggested, as this would bring a number of the key sites together and potentially provide a base with which to improve the updates of the sites. In addition it was seen as a first step towards <a href="">CPAN Testers 2.0</a>.</p><p>In late September 2008 a proposal was put forward to the members of <a href="">Birmingham Perl Mongers</a>, to donate funds towards acquiring a dedicated server to host a range of sites and databases. Unanimously they approved the proposal and a server was paid for at <a href="">Hetzner Online AG</a>. 
Based in Germany on a high-bandwidth line, the server has enabled CPAN Testers to grow and now supports a dynamic set of sites and databases that are a consistent benefit to authors and users.</p><p>The server was covered for 1 year, with the intention of looking for a corporate sponsor to continue the funding for further years. However, due to the recent economic climate, the opportunities for funding appear to be limited. As such, recently another proposal was put to the members of Birmingham Perl Mongers, and once again they unanimously approved it. The server and hosting are now paid up for another year, and plans are afoot to further increase the family of sites and provide more resources to authors, testers and users.</p><p>Many thanks to all of <a href="">Birmingham Perl Mongers</a> for their continued support of the CPAN Testers project.</p> barbie 2009-11-17T08:45:40+00:00 cpan git-dot <p>I'm currently experimenting with creating graphs using Git... I'm not using historical data, or even data at all (I'll soon know the SHA1 of the empty tree by heart), just nodes with <code>%s</code> as their label (I have yet to find a use for the rest of the metadata).</p><p> <b>gitk</b> is nice for looking at historical information, but not so good for graphs. On the other hand, GraphViz is great for showing graphs.</p><p>What better than Perl (and a tiny wrapping of shell on top) to produce graphs?</p><blockquote><div><p> <tt>&nbsp; &nbsp; #!/bin/sh<br>&nbsp; &nbsp; # create a good looking graph with dot<br>&nbsp; &nbsp; echo "digraph G {"<br>&nbsp; &nbsp; git log --pretty=format:"%h %p" $* \<br>&nbsp; &nbsp; | perl -lna&nbsp; -e 'print qq("$F[0]";),map{qq("$_"-&gt;"$F[0]";)}@F[1..$#F]'<br>&nbsp; &nbsp; echo "}"</tt></p></div> </blockquote><p>The output of this is usually boring, so just pipe it to <code>dot -Tpng -ograph.png</code> and watch the pretty pictures.</p><p>Also, imagine a graph that has a full filesystem attached to each node. 
This is exactly the kind of stuff that Git can give us.</p><p>Not that I have any idea what this could be used for...</p> BooK 2009-11-12T16:50:11+00:00 journal nopaste <p>Just a quick post to show off a small and useful script I use whenever I need to "nopaste" some text or code:</p><blockquote><div><p> <tt>&nbsp; &nbsp; #!/usr/bin/perl -w<br>&nbsp; &nbsp; use strict;<br>&nbsp; &nbsp; use WWW::Mechanize;<br>&nbsp; &nbsp; use Getopt::Long;<br> <br>&nbsp; &nbsp; my %SITE = (<br>&nbsp; &nbsp; &nbsp; &nbsp; snit&nbsp; =&gt; '',<br>&nbsp; &nbsp; &nbsp; &nbsp; scsys =&gt; '',<br>&nbsp; &nbsp; );<br> <br>&nbsp; &nbsp; my %CONF = (<br>&nbsp; &nbsp; &nbsp; &nbsp; channel =&gt; '',<br>&nbsp; &nbsp; &nbsp; &nbsp; nick&nbsp; &nbsp; =&gt; '',&nbsp; &nbsp; &nbsp; &nbsp;# use your own<br>&nbsp; &nbsp; &nbsp; &nbsp; summary =&gt; '',<br>&nbsp; &nbsp; &nbsp; &nbsp; paste&nbsp; &nbsp;=&gt; '',<br>&nbsp; &nbsp; &nbsp; &nbsp; site&nbsp; &nbsp; =&gt; 'snit',<br>&nbsp; &nbsp; &nbsp; &nbsp; list&nbsp; &nbsp; =&gt; '',<br>&nbsp; &nbsp; );<br> <br>&nbsp; &nbsp; GetOptions( \%CONF, 'lang=s', 'nick=s', 'summary|desc=s', 'paste|text=s',<br>&nbsp; &nbsp; &nbsp; &nbsp; 'list!', 'site=s' )<br>&nbsp; &nbsp; &nbsp; &nbsp; or die "Bad options";<br> <br>&nbsp; &nbsp; die "No such paste site: $CONF{site}\nValid choices: @{[keys %SITE]}\n"<br>&nbsp; &nbsp; &nbsp; &nbsp; if !exists $SITE{ $CONF{site} };<br> <br>&nbsp; &nbsp; my $m = WWW::Mechanize-&gt;new;<br>&nbsp; &nbsp; $m-&gt;get( $SITE{ $CONF{site} } );<br>&nbsp; &nbsp; die $m-&gt;res-&gt;status_line unless $m-&gt;success;<br> <br>&nbsp; &nbsp; if ( $CONF{list} ) {<br>&nbsp; &nbsp; &nbsp; &nbsp; print "Possible channels for $CONF{site}:\n",<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; map {"- $_\n"} grep $_,<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; $m-&gt;current_form()-&gt;find_input('channel')-&gt;possible_values;<br>&nbsp; &nbsp; &nbsp; &nbsp; exit;<br>&nbsp; &nbsp; }<br> <br>&nbsp; &nbsp; unless ( $CONF{paste} ) {<br>&nbsp; &nbsp; &nbsp; 
&nbsp; $CONF{summary} ||= $ARGV[0] || '-';<br>&nbsp; &nbsp; &nbsp; &nbsp; $CONF{paste} = join "", &lt;&gt;;<br>&nbsp; &nbsp; }<br> <br>&nbsp; &nbsp; delete @CONF{qw( site list )};<br>&nbsp; &nbsp; $m-&gt;set_fields(%CONF);<br>&nbsp; &nbsp; $m-&gt;submit;<br>&nbsp; &nbsp; die $m-&gt;res-&gt;status_line unless $m-&gt;success;<br> <br>&nbsp; &nbsp; print +( $m-&gt;links )[0]-&gt;url, "\n";</tt></p></div> </blockquote><p>Since it works as a filter, I can call it from vim or pipe to it. It also works with a file parameter, which is used to set the paste title.</p><p>Just before posting this, I looked again on CPAN, and found the following:</p><ul> <li> <p> <b>App::NoPaste</b>: Seems really complete. But does much more than I need, and I like depending only on WWW::Mechanize for such tools.</p></li><li> <p> <b>WWW::Rafb</b>: Well, an earlier version of my script worked with it, but the site itself is down.</p></li><li> <p> <b>WWW::PasteBin</b>: Such a huge collection of distributions, I wouldn't know which to install first.</p></li><li> <p> <b>WebService::NoPaste</b>: Found it years ago when looking for a nopaste utility, but I preferred to write my own.</p></li></ul><p>Clearly, there is no lack of modules to nopaste stuff, so I'm not going to add my own to the list.<nobr> <wbr></nobr><code>:-)</code> </p><p>I tried publishing scripts on CPAN, but I feel the toolchain is not really targeting scripts. Anyway, I have a few utility scripts like the above lying around, and I'm thinking maybe the best way to go with them nowadays is to just publish them on GitHub. Might happen someday.</p> BooK 2009-11-08T16:34:56+00:00 journal CPAN Testers Summary - October 2009 - Live ... In The Raw <p>Cross-posted from the <a href="">CPAN Testers Blog</a>.</p><p>If you've not been following the <a href="">CPAN Testers</a> in the last month, you will likely have missed the updates to the <a href="">CPAN Testers Statistics</a> site. 
I would like to thank <strong>MW487</strong>, <strong>JJ</strong> and <strong>Colin Newell</strong> for their thoughts and suggestions. The biggest changes have been around the matrices. The old matrices have been thrown away and a completely new set has been created, merging much of the data that was previously across the two old style matrices. The site now also looks at the OS itself, rather than the specific version installed, which now gives a better general overview. In turn a new <a href="">OS table</a> is also available highlighting the number of tests per month attributed to a particular OS. Unsurprisingly Linux is currently streets ahead of any other OS.</p><p>The graphs have always been of interest to those wishing to use them to promote Perl and CPAN; however, the way they are currently presented doesn't always suit everyone, especially if they wish to change the style or take a different snapshot of the data. As such, you can now download the raw data files used to generate the graphs. All the files are in CSV format, so are easily loaded into your spreadsheet application of choice. Speaking of spreadsheets, in addition to changing the look of the matrices, you can now also download an XLS version of each matrix, as well as now having the ability to view each table in a widescreen format.</p><p>A new graph available is the <a href="">Performance Graph</a>, which shows how the CPAN Testers Reports Page Builder is performing each day, against the volume of reports submitted per day. While the majority of the time the Builder does perform well, every so often it slows due to the load on the web server, meaning it has to occasionally catch up, which can take several days. Now you can see whether any issues have caused your page to take a little longer to build, as well as get a better idea of how many reports are getting submitted every day.</p><p>The most recent update has been the new dashboard on the homepage. 
Every so often I get asked how many distributions are on CPAN. Although the <a href="">CPAN Statistics</a> have had their own page for a while now, some have mentioned that it would be really cool to have a ticker that flips as a new upload gets added to CPAN. Although I can't do that just yet in true realtime, the new dashboard does try to emulate the rate at which reports and uploads have been submitted over the previous 24 hours.</p><p>In other CPAN related news the proposals and discussions for <a href="">Meta-Spec 2.0</a> have now come to a close. David Golden is currently <a href="">accepting patches</a> to the approved proposals and hopefully we'll have a new draft specification available soon. It's been an interesting discussion in some cases, while others have been agreed or rejected almost without question. Some require a bit more thought, so it's likely there will be a further refinement of the spec in the future. If you want to read all the threads, visit the <a href="">mailing list archives</a>.</p><p>Last month we had a total of 171 tester addresses submitting reports. The mappings this month included 27 total addresses mapped, of which 14 were for newly identified testers.</p><p>Until next time, happy testing<nobr> <wbr></nobr>:)</p> barbie 2009-11-06T20:23:45+00:00 cpan OOPSLA 2009 <div><p> <a href="">OOPSLA 2009</a> happened a few weeks ago. OOPSLA stands for Object-Oriented Programming, Systems, Languages &amp; Applications and I&#39;ve always been quite interested in the conference. The proceedings of the conference aren&#39;t put online, but I&#39;ve managed to find two interesting papers:</p><p> <a href="">A Market-Based Approach to Software Evolution</a> (PDF) tries to imagine an open market which is targeted at fixing bugs and improving software. It&#39;s quite interesting as it&#39;s quite similar to a proposal from Nicholas on <a href="">spending other people&#39;s money</a>. 
The authors point out many potential flaws.</p><p> <a href="">The Commenting Practice of Open Source</a> (PDF) analyses projects on <a href="">Ohloh</a> and tries to spot commenting trends. &quot;We find that comment density is independent of team and project size&quot;, but they find that it varies from language to language. &quot;Java has the highest mean of comment lines per source lines at.. one comment line for three source code lines&quot; and &quot;Perl has the lowest mean with.. one comment line for nine source code lines&quot;. They list finding out why this might be the case as future work.</p></div> acme 2009-11-05T08:41:53+00:00 journal <p>I'm trying to compile a bunch of old perls to test my modules against them. I started with 5.8.9, and it went like this:</p><blockquote><div><p> <tt>&nbsp; &nbsp; $ git checkout -f perl-5.8.9<br>&nbsp; &nbsp; [git output]<br>&nbsp; &nbsp; $ git clean -xdf<br>&nbsp; &nbsp; [more git output]<br>&nbsp; &nbsp; $ sh Configure -Dprefix=/opt/perl/5.8.9 -des -Uinstallusrbinperl<br>&nbsp; &nbsp; [Configure output]<br>&nbsp; &nbsp; $ make &amp;&amp; make test &amp;&amp; make install<br>&nbsp; &nbsp; [make output]<br>&nbsp; &nbsp; [test output]<br>&nbsp; &nbsp; [install output]</tt></p></div> </blockquote><p>And, ta-da! After less than 15 minutes, Perl 5.8.9 was in<nobr> <wbr></nobr><i>/opt/perl/</i>, ready to be used.</p><p>Encouraged by this, I went on to compile 5.8.8. I was a bit disappointed when the same procedure (after a <code>s/5.8.9/5.8.8/g</code>) failed with:</p><blockquote><div><p> <tt>&nbsp; &nbsp; $ sh Configure -Dprefix=/opt/perl/5.8.8 -des -Uinstallusrbinperl<br>&nbsp; &nbsp; [Configure output]<br>&nbsp; &nbsp; Run make depend now? 
[y]<br>&nbsp; &nbsp; sh<nobr> <wbr></nobr>./makedepend MAKE=make<br>&nbsp; &nbsp; make[1]: Entering directory `/data/home/book/src/ext/perl'<br>&nbsp; &nbsp; sh writemain lib/auto/DynaLoader/DynaLoader.a&nbsp; &gt; perlmain.c<br>&nbsp; &nbsp; rm -f opmini.c<br>&nbsp; &nbsp; cp op.c opmini.c<br>&nbsp; &nbsp; echo&nbsp; av.c scope.c op.c doop.c doio.c dump.c hv.c mg.c reentr.c perl.c perly.c pp.c pp_hot.c pp_ctl.c pp_sys.c regcomp.c regexec.c utf8.c gv.c sv.c taint.c toke.c util.c deb.c run.c universal.c xsutils.c pad.c globals.c perlio.c perlapi.c numeric.c locale.c pp_pack.c pp_sort.c miniperlmain.c perlmain.c opmini.c | tr ' ' '\n' &gt;.clist<br>&nbsp; &nbsp; make[1]: Leaving directory `/data/home/book/src/ext/perl'<br>&nbsp; &nbsp;<nobr> <wbr></nobr>./makedepend: 1: Syntax error: Unterminated quoted string<br>&nbsp; &nbsp; make: *** [depend] Error 2<br> <br>&nbsp; &nbsp; If you compile perl5 on a different machine or from a different object<br>&nbsp; &nbsp; directory, copy the file from this object directory to the<br>&nbsp; &nbsp; new one before you run Configure -- this will help you with most of<br>&nbsp; &nbsp; the policy defaults.</tt></p></div> </blockquote><p>In the past, I had tried to compile older Perls, and S&#233;bastien Aperghis-Tramoni had pointed me to a few patches he had made to be able to compile 5.004_05 with a more recent (3.x) gcc.</p><p>So, by looking at his patch again and the output of <code>git diff perl-5.8.8 perl-5.8.9 -- makedepend.SH</code> I was able to produce a patch that looked exactly like S&#233;bastien's patches for 5.004_05. 
The <code>Configure</code> step now worked!</p><p>But then it's the compilation phase that failed:</p><blockquote><div><p> <tt>&nbsp; &nbsp; $ make<br>&nbsp; &nbsp; [make output]<br>&nbsp; &nbsp; &nbsp; &nbsp; Making IPC::SysV (dynamic)<br>&nbsp; &nbsp; Checking if your kit is complete...<br>&nbsp; &nbsp; Looks good<br>&nbsp; &nbsp; Writing Makefile for IPC::SysV<br>&nbsp; &nbsp; make[1]: Entering directory `/data/home/book/src/ext/perl/ext/IPC/SysV'<br>&nbsp; &nbsp; make[1]: Leaving directory `/data/home/book/src/ext/perl/ext/IPC/SysV'<br>&nbsp; &nbsp; make[1]: Entering directory `/data/home/book/src/ext/perl/ext/IPC/SysV'<br>&nbsp; &nbsp; cp<nobr> <wbr></nobr>../../../lib/IPC/<br>&nbsp; &nbsp; cp<nobr> <wbr></nobr>../../../lib/IPC/<br>&nbsp; &nbsp; cp<nobr> <wbr></nobr>../../../lib/IPC/<br>&nbsp; &nbsp;<nobr> <wbr></nobr>../../../miniperl "-I../../../lib" "-I../../../lib"<nobr> <wbr></nobr>../../../lib/ExtUtils/xsubpp&nbsp; -typemap<nobr> <wbr></nobr>../../../lib/ExtUtils/typemap&nbsp; SysV.xs &gt; SysV.xsc &amp;&amp; mv SysV.xsc SysV.c<br>&nbsp; &nbsp; cc -c&nbsp; &nbsp;-fno-strict-aliasing -pipe -Wdeclaration-after-statement -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2&nbsp; &nbsp;-DVERSION=\"1.04\" -DXS_VERSION=\"1.04\" -fpic "-I../../.."&nbsp; &nbsp;SysV.c<br>&nbsp; &nbsp; SysV.xs:7:25: error: asm/page.h: No such file or directory<br>&nbsp; &nbsp; make[1]: *** [SysV.o] Error 1<br>&nbsp; &nbsp; make[1]: Leaving directory `/data/home/book/src/ext/perl/ext/IPC/SysV'<br>&nbsp; &nbsp; make: *** [lib/auto/IPC/SysV/] Error 2</tt></p></div> </blockquote><p>This looked a bit trickier. Luckily, Google informed me that the <i>asm/page.h</i> file had moved in the Linux tree. Using git again, I looked for changes involving <code>page.h</code>. The changes were a lot bigger, and harder for me to understand.</p><p>By chance, the first interesting diff I found was for a change in <i>ext/Devel/PPPort/devel/</i>. 
Wow, this looked exactly like the tool I needed to, erm, <i>build perl</i>.</p><p>So instead of adding complexity to my home-made perl-building script by adding more cases where a patch was necessary, I tried my hand at patching <b></b> to make it support newer gcc versions than the ones that existed at the time the older perls were written.</p><p>Looking for the proper patches with git is extremely easy, and I was quickly able to find the necessary patches. The longer part was to actually compile all Perls from 5.6.0 to 5.9.5.<nobr> <wbr></nobr><code>:-)</code> </p><p>Since works with perl archives, my test script ended up like this:</p><blockquote><div><p> <tt>&nbsp; &nbsp; #!/bin/sh<br> <br>&nbsp; &nbsp; # blead has my local patch<br>&nbsp; &nbsp; git checkout blead<br>&nbsp; &nbsp; git clean -xdf<br> <br>&nbsp; &nbsp; # setup temporary directories<br>&nbsp; &nbsp; rm<nobr> <wbr></nobr>/tmp/buildperl.log<br>&nbsp; &nbsp; rm -rf<nobr> <wbr></nobr>/tmp/perl<br>&nbsp; &nbsp; mkdir<nobr> <wbr></nobr>/tmp/perl<br>&nbsp; &nbsp; mkdir<nobr> <wbr></nobr>/tmp/perl/source<br> <br>&nbsp; &nbsp; # compile and test all the tags given on command-line<br>&nbsp; &nbsp; for tag in $* ; do<br> <br>&nbsp; &nbsp; &nbsp; &nbsp; # get the version<br>&nbsp; &nbsp; &nbsp; &nbsp; version=`echo $tag|cut -d- -f 2`<br> <br>&nbsp; &nbsp; &nbsp; &nbsp; # make a tarball<br>&nbsp; &nbsp; &nbsp; &nbsp; echo "=== creating<nobr> <wbr></nobr>/tmp/perl/source/perl-$version.tar.gz"<br>&nbsp; &nbsp; &nbsp; &nbsp; git archive --format=tar --prefix=$tag/ $tag^{tree} \<br>&nbsp; &nbsp; &nbsp; &nbsp; | gzip &gt;<nobr> <wbr></nobr>/tmp/perl/source/$tag.tar.gz<br> <br>&nbsp; &nbsp; &nbsp; &nbsp; perl cpan/Devel-PPPort/devel/ --config default --perl $version --test<br> <br>&nbsp; &nbsp; &nbsp; &nbsp; # check it was installed correctly<br>&nbsp; &nbsp; &nbsp; &nbsp; if [ -d<nobr> <wbr></nobr>/tmp/perl/install/default/$tag/ ] ; then<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; result="ok"<br>&nbsp; &nbsp; 
&nbsp; &nbsp; else<br>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; result="not ok"<br>&nbsp; &nbsp; &nbsp; &nbsp; fi<br>&nbsp; &nbsp; &nbsp; &nbsp; echo "$result - $version" &gt;&gt;<nobr> <wbr></nobr>/tmp/buildperl.log<br> <br>&nbsp; &nbsp; done</tt></p></div> </blockquote><p>Then I just fed it with the appropriate git tags, and let it run for a while:</p><blockquote><div><p> <tt>&nbsp; &nbsp; $<nobr> <wbr></nobr>./compile_perl `git tag -l 'perl-5.8*'`</tt></p></div> </blockquote><p>So within one and a half hour, I was able to compile, test and install all the 5.8.x versions of Perl.</p><p>I already have added the changes that allow 5.6.x, 5.8.x and 5.9.x to compile again in modern environments, and just sent the patches to P5P and Marcus Holland-Moritz.</p><p>Eventually, I'll work on adding the needed patches for versions 5.004, 5.005, and send another patch batch.</p> BooK 2009-10-29T20:44:04+00:00 journal Switching perls <p> I have a few perls compiled and installed in<nobr> <wbr></nobr><tt>/opt/perl</tt>: </p><blockquote><div><p> <tt>$ ls<nobr> <wbr></nobr>/opt/perl<br>5.10.0&nbsp; 5.6.2&nbsp; 5.8.7&nbsp; 5.8.8</tt></p></div> </blockquote><p> A long time ago, I tried to set up an environment that would setup the proper PATH to always reach the perl I wanted when typing <tt>perl</tt> on the command-line. That involved a shell script, which of course couldn't change the environment of the outer shell, so it actually started another shell, resulting in the following mess: </p><blockquote><div><p> <tt>5271 pts/2&nbsp; &nbsp; Ss&nbsp; &nbsp; &nbsp;0:01 bash<br>6182 pts/2&nbsp; &nbsp; S&nbsp; &nbsp; &nbsp; 0:00&nbsp; \_<nobr> <wbr></nobr>/bin/bash<nobr> <wbr></nobr>./perlenv 5.10.0<br>6183 pts/2&nbsp; &nbsp; S&nbsp; &nbsp; &nbsp; 0:00&nbsp; &nbsp; &nbsp; \_<nobr> <wbr></nobr>/bin/bash</tt></p></div> </blockquote><p> I could also have moved a canonical symlink around, but this had the advantage that several independent shells could run different perls. 
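The script couldn't work for the usual reason: a child process only ever changes its own copy of the environment, and nothing it sets survives its exit. A tiny illustration (the variable name is made up):

```shell
# The child shell modifies its own copy of MARKER; the parent's value
# is untouched once the child exits.
MARKER=parent
export MARKER
sh -c 'MARKER=child; export MARKER'
echo "$MARKER"   # still prints: parent
```

Which is why a shell function or alias, running inside the current shell process, can do what a separate script cannot.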
</p><p> Anyway, that was unworkable until I realized I could change the current shell environment using aliases or shell functions. So, assuming my perl binaries live in<nobr> <wbr></nobr><tt>/opt/perl/$VERSION/bin</tt>, the following bash shell function does the trick: </p><blockquote><div><p> <tt>setperl ()<br>{<br>&nbsp; &nbsp; export PATH=`echo $PATH|sed -e "s:/opt/perl/[^/]*/bin:/opt/perl/$1/bin:"`<br>}</tt></p></div> </blockquote><p> And my <tt>~/.bashrc</tt> points to the perl I want to use by default. </p> BooK 2009-10-22T08:13:55+00:00 journal Games <div><p>A few weeks ago I was up in the hills above Geneva reminiscing with my sister about all the things we used to enjoy when we were smaller. When I was younger I used to really enjoy programming computer games, first on my 48K Spectrum and then later on in <a href="">STOS BASIC</a> and then <a href="">68000</a> assembly language on my Atari ST.</p><p>I haven&#39;t programmed a game in a very long time. However, I&#39;m an avid gamer, playing games while travelling on my DS and at home on my Xbox 360. I almost enjoy reading <a href="">Edge magazine</a> more than I like playing games.</p><p>At YAPC::Europe in Lisbon, Domm pointed out that the <a href="">Perl SDL project</a> (which wraps the <a href="">Simple DirectMedia Layer</a>) was languishing and that we should all program games in Perl.</p><p>A few months later I got around to playing with SDL and made a simple <a href="">breakout clone</a> which I styled after <a href="">Batty</a> on the Spectrum, but with gravity. It was fairly easy to program, but there was a lot to grasp. The Perl libraries are a mix between a Perl interface to SDL and a Perlish interface to SDL, with limited documentation, tests and examples.</p><p>Of course this is where I join the #sdl IRC channel and start discussing with the other hackers (kthakore, garu, nothingmuch). 
We decide on a major redesign to split the project into two sections: the main code will just wrap SDL and then there will be another layer which makes it easier to use. I&#39;ve started writing a bunch of XS on the <a href="">redesign branch</a> of the repository while trying to keep <a href="">Bouncy (my game)</a> still working. There is a bunch of work still to do but we&#39;ve made a good start. This is what Bouncy looks like at the moment:<br> <br> <a href="">[YouTube video]</a><br> <br> The physics are pretty fun and it runs pretty fast (1800 frames/second). I'm taking a little break as I'm off to Taipei...</p></div> acme 2009-10-19T07:34:03+00:00 journal Mail::Box++ <p> So <a href="">sferics' disk died</a> and took everything that wasn't backed up with it. It turns out that among the important things that weren't backed up were the configurations and subscriber lists of the Mailman mailing-lists. </p><p> To get the subscriber lists back, we didn't have many options: </p><ul> <li>Resubscribe all posters, which would also subscribe the ones who did unsubscribe at some point, and kick out all the lurkers... <i>Not so good.</i> </li><li>Process all the administrative messages to get a chronological list of subscribe/unsubscribe actions, and rebuild the list from there.</li><li>Seed the list with the list of posters, if possible the subscribed ones (i.e. not the ones whose mails were passed through moderation)</li></ul><p> We use Mailman for managing our mailing-lists. 
It works well enough for us, but it has its share of annoyances: </p><ul> <li>Localized admin messages are nice, but an <tt>X-Mailman-Action</tt> header would be nicer; without one, we had to process the body of each action, in all the languages we might have used</li></ul><p> Luckily, it also has a few helpful headers: </p><ul> <li> <tt>List-Id</tt>, so we can process a folder full of admin messages and know which list each message is about</li><li> <tt>X-Mailman-Approved-At</tt>, so we can detect messages sent by non-subscribed posters</li></ul><p> I used <a href="">Mail::Box</a> to create two scripts to process the messages (in French and English, so anyone who subscribed in German is out of luck) and provide some useful info (the people with the archives and the admin-fu did the actual work of fixing the lists). </p><p> Sure, <tt>Mail::Box</tt> is slow, and the interface is a bit complicated, but on the other hand it is well documented, and it's correct. </p><p> Summary at the end of the day^Wlong week-end: </p><ul> <li>(incomplete backups)--</li><li>Mailman-- # misses useful headers that would have made the job easier</li><li> <tt>Mail::Box</tt>++ # gets the job done in 20 lines</li></ul> BooK 2009-10-14T07:00:40+00:00 journal YAPC Conference Surveys - Future Plans <p>Now that I've got the <a href="">YAPC::NA</a> and <a href="">YAPC::Europe</a> surveys done and dusted, I can now start looking at some of the changes I want to make to the survey system. Some I have already been planning, while others have been suggested by people who have taken the surveys. </p><p> <b>Paint By Number Heart</b> </p><p>The YAML file used to drive the surveys was originally designed to not reference any question numbers, with the code creating these on the fly to ensure no duplicates. To begin with this worked fine, but there are now such a variety of questions, including some that relate back to others, that knowing the question number ahead of time is useful. 
This is a fairly minor change, so shouldn't be too tricky to implement.</p><p> <b>Take It To The Limit</b> </p><p>I've had a few comments from people regarding the number of times they have attended a particular conference or workshop. Seeing as some events have now been running for many years, people who have been to all of them but can't remember how many that is have entered big numbers to imply as much. As such, in future I'm going to add the feature that allows you to write 'ALL', as well as including the max number of occurrences in the question label. Then if anyone does enter a rather large number or 'ALL' it'll get reset to the correct maximum.</p><p> <b>Who Can It Be Now?</b> </p><p>With the talk evaluations, a few speakers have asked whether I can tell them who wrote what in their evaluation results. Due to the nature of submitting everything anonymously I currently don't include any contact details with the responses. Having said that, there is no reason why anyone submitting an evaluation can't give their consent to being identified, so that speakers can respond to the individual if they wish. The speakers concerned have had genuine reasons for wishing to contact individuals, so from next year there will be an extra tick box if you would like to be identified to the speaker. The default will still be anonymous submission, so it will be entirely the individual's choice as to whether they identify themselves or not.</p><p> <b>Just Like You Imagined</b> </p><p>I started writing a specification for the Survey YAML file during the Birmingham YAPC::Europe in 2006, but haven't kept it up to date with the changes that have been made in more recent times. I plan to complete this so that anyone reading the YAML files can make sense of them. It also might be useful for others to suggest new features.</p><p> <b>P Machinery</b> </p><p>I have the raw data from all the surveys since 2006 in SQL form. 
However, I want to make it more freely available and accessible, so others can analyse the data in different ways. Together with the raw data itself, this then needs the YAML file, the Survey specification and an example translation program to help understand how the data maps to the questions and what questions relate to each other. Essentially this just means me cleaning up the program I use to prepare the results and documenting everything.</p><p> <b>Speak My Language</b> </p><p>At the moment the survey site is written and presented in English. However, it's been long overdue for the questions and text to be in a variety of languages. So although I don't speak other languages (at least not well enough to be competent in them), I'm hoping others will be willing to help out with some translations. Before I get to that stage though, I need to add support to allow for detecting and changing between languages. </p><p>Part of the reason for doing this is to allow YAPCs (and workshops) in other countries to take advantage of the surveys, if they wish to. At the moment I've only been working with the YAPC::Europe and YAPC::NA teams, as they have been the conferences I've attended, and are predominantly English language based. But these days I don't need to be there to run the surveys, and the Perl community pretty much covers the whole world, so why not accumulate knowledge from other events? At the very least, regular organising teams for workshops should then have the opportunity to get feedback to improve their events too.</p><p> <b>Time Stand Still</b> </p><p>It doesn't, but I often wish it would! A couple of people have commented that it takes such a long time between the end of the conference and the close of the surveys. This year I reduced it to 6 weeks, but it can then take an additional couple of weeks to process all the results, create graphs, prepare the website pages, write the documents for organisers and generate the emails for all the speakers. 
Much of it is automated now, but there are several tests and tweaks that need to happen to get it looking right. This will be improved further with the question numbering change, as I can fine-tune the templates for particular questions.</p><p>As for improving the response rate, it often takes people some time to collect their thoughts. The bulk of responses are typically received within the first 2 weeks after the conference. The last week sees another big increase in responses, when I send out the '1 week left' reminder. The weeks in between still have a steady trickle of responses, but it is slow. By way of example, below are the numbers of responses received each day for the YAPC::Europe survey. The last 2 columns indicate the number of survey responses and the number of talk evaluation responses respectively:</p><blockquote><div><p> <tt>Mon&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&lt;-- evaluations opened<br>Tue,&nbsp; 4th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;-&nbsp; 27<br>Wed,&nbsp; 5th August&nbsp; &nbsp; 2009&nbsp; &nbsp; 36 234&nbsp; &lt;-- last day of conference<br>Thu,&nbsp; 6th August&nbsp; &nbsp; 2009&nbsp; &nbsp; 34 119<br>Fri,&nbsp; 7th August&nbsp; &nbsp; 2009&nbsp; &nbsp; 31 101&nbsp; &lt;-- end of tutorials<br>Sat,&nbsp; 8th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;6&nbsp; 55<br>Sun,&nbsp; 9th August&nbsp; &nbsp; 2009&nbsp; &nbsp; 14&nbsp; 47<br>Mon, 10th August&nbsp; &nbsp; 2009&nbsp; &nbsp; 17&nbsp; 28<br>Tue, 11th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;9&nbsp; 58<br>Wed, 12th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;4&nbsp; 34<br>Thu, 13th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;3&nbsp; 13<br>Fri, 14th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;3&nbsp; 18&nbsp; &lt;-- end of week 1<br>Sat, 15th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;1&nbsp; &nbsp;3<br>Sun, 16th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;2&nbsp; 11<br>Mon, 17th August&nbsp; &nbsp; 
2009&nbsp; &nbsp; &nbsp;4&nbsp; &nbsp;2<br>Tue, 18th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;2&nbsp; &nbsp;-<br>Wed, 19th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;2&nbsp; 33<br>Thu, 20th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;2&nbsp; &nbsp;-<br>Fri&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&lt;-- end of week 2<br>Sat<br>Sun<br>Mon, 24th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;1&nbsp; &nbsp;7<br>Tue, 25th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;1&nbsp; &nbsp;-<br>Wed<br>Thu<br>Fri&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&lt;-- end of week 3<br>Sat<br>Sun, 30th August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;1&nbsp; &nbsp;6<br>Mon, 31st August&nbsp; &nbsp; 2009&nbsp; &nbsp; &nbsp;1&nbsp; &nbsp;-<br>Tue<br>Wed,&nbsp; 2nd September 2009&nbsp; &nbsp; &nbsp;1&nbsp; &nbsp;-<br>Thu,&nbsp; 3rd September 2009&nbsp; &nbsp; &nbsp;-&nbsp; 13<br>Fri&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&lt;-- end of week 4<br>Sat<br>Sun<br>Mon<br>Tue<br>Wed<br>Thu, 10th September 2009&nbsp; &nbsp; &nbsp;1&nbsp; &nbsp;-<br>Fri, 11th September 2009&nbsp; &nbsp; &nbsp;1&nbsp; &nbsp;-&nbsp; &lt;-- end of week 5<br>Sat<br>Sun<br>Mon<br>Tue, 15th September 2009&nbsp; &nbsp; 23&nbsp; 48&nbsp; &lt;-- reminder sent out<br>Wed, 16th September 2009&nbsp; &nbsp; &nbsp;2&nbsp; &nbsp;1<br>Thu, 17th September 2009&nbsp; &nbsp; &nbsp;3<br>Fri, 18th September 2009&nbsp; &nbsp; &nbsp;2&nbsp; 46&nbsp; &lt;-- end of week 6<br>Sat, 19th September 2009&nbsp; &nbsp; &nbsp;2&nbsp; &nbsp;8<br>Totals:&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 209 912</tt></p></div> </blockquote><p>There is indeed a lull in the middle 3 weeks, so next year I do plan to try and reduce the availability of the surveys to at most 4 weeks. 
With the improvements to the evaluation and preparation code, it should then take about a week at most to review the data and then publish the results.</p><p>However, although my aim is to improve the turnaround from conference to results, there are a few other factors involved. All of my time taken to administer the surveys (before, during and after each conference) needs to accommodate my employer and family too. The conferences are typically around holiday season, so juggling everything to get the results done in a timely manner can be fun! Please bear this in mind if I don't fit into your personal schedule.</p><p> <b>Release The Bats</b> </p><p>It's always been my intention to release the code that runs the survey system as Open Source; however, it means releasing two projects as Open Source, not one. The survey code is built on top of Labyrinth, a web management system I started developing in 2002 (2001 if you include the prototype), and is now used to run several CPAN Testers websites, as well as GlousLUG and many other sites I run. Having a development team of 1 has meant it has taken a while for the code base to reach a stable state, which it has now been in for a couple of years. It's not Catalyst or Jifty, although the core could potentially be abstracted and released as such. However, that's not what I'm going to release. Essentially I'll be releasing Labyrinth in two parts: the Perl library code and the supporting data files. On top of that I can then release the YAPC Survey code with the appropriate templates, plugins and additional data files.</p><p>Whether anyone then collaborates or not is another matter, but at least they'll have the chance to. People running smaller events will also have the ability to download, install and run the code themselves to administer their own surveys. 
</p><p> <b>Pretending To See The Future</b> </p><p>If you're an event organiser and think that the YAPC Survey system would be something you could use, please get in touch with any ideas you'd like to see featured. Also, if you took any of the surveys this year and think they could be improved, whether it's simply rewording questions, adding questions, or improving some functionality, please let me know.</p><p>I am very grateful for the support of all the conference organisers, to Eric Cholet for writing a new ACT API for me, and to the several speakers who have sent me personal thanks for supplying them with feedback on their talks, so it's obviously something that is appreciated by the majority (though sadly not all). The split between the main survey and the talk/tutorial surveys seems to have worked well, and the majority of improvements so far have gone down well. There is still room for improvement, and hopefully the changes above will make for more feedback next year.</p> barbie 2009-10-06T13:02:52+00:00 journal <p> I just came back from <a href=""></a> (Open Source Developers Conference, France). This was the first edition of the conference, organized jointly by <a href="">Les Mongueurs de Perl</a>, <a href="">AFPy</a> (Association Francophone Python) and <a href="">Ruby France</a>. </p><p> There were a lot more people on the Saturday than on the Friday, probably because many people were at work and couldn't get their employers to let them attend on company time. Or something. </p><p> In my view, as an attendee and an organizer <small>(I had a tiny tiny role in the organizing team)</small>, it was a great success. 
I saw presentations about: </p><ul> <li>OpenSUSE on the Gdium</li><li>Cucumber (Ruby)</li><li>Hadoop</li><li>Seaside (Smalltalk)</li><li>Moose (Perl)</li><li>JavaScript</li><li>MySQL</li><li>Acmeism (Ingy d&#246;t Net's new religion)</li><li>Dancer (yet another Perl web framework)</li><li>Why one shouldn't say that "reinventing the wheel" is bad, but "reinventing the toothbrush" clearly is</li><li>And a few others...</li></ul><p> The Seaside presentation was really an eye-opener, showing how clean web development can be (and how exciting Smalltalk is), even though I'll probably never use it. </p><p> I keep hearing about Moose and Catalyst, but am too lazy and busy to really start investing in and learning about them. So going to the Moose talk at least kept me informed. </p><p> I realized Acmeism is the religion of<nobr> <wbr></nobr><tt>;-)</tt> </p><p> <a href="">Dancer</a> is a new lightweight web framework (ported from Ruby's <a href="">Sinatra</a>) and I really want to try it now. Maybe some of the website ideas I have will finally see the light! </p><p> In the hallway track, I spent some time talking about gender issues with a woman (not a developer, but someone who has been involved in a few projects with geeks) who was passing by and stopped to talk with us. I also spent a lot of time showing the power of Git to other Perl Mongers. In the end, I took over an empty slot to tour the Git object database and answer questions from the audience... I really should work on the Git tutorial I have in mind. (By the way, O'Reilly Media's <i> <a href="">Version Control with Git</a> </i> contains everything I would like to find in a Git tutorial, and more.) 
</p><p> All in all, an excellent conference; I got to meet the people from over the fences and you won't believe how much they look like us.<nobr> <wbr></nobr><tt>;-)</tt> </p> BooK 2009-10-05T23:21:12+00:00 journal CPAN Testers Summary - September 2009 - Wind And Wuthering <p>Cross-posted from the <a href="">CPAN Testers Blog</a>.</p><p>Last month was a fairly quiet month in terms of development, as projects such as the call for <a href="">CPAN Meta Spec change proposals</a> were opened on the <a href="">QA Wiki</a>, and the <a href="">NA</a> and <a href="">Europe</a> <a href="">YAPC Conference Surveys</a> were unveiled. However, there are some statistics that should see the light of day soon, thanks to <a href="">Tim Bunce</a> putting together an updated <a href=";;">Perl Myths</a> talk for <a href="">OSSBarCamp</a> in Dublin last month. Expect to see them on the <a href="">CPAN Statistics site</a> some time during the month.</p><p>The CPAN Testers have been continuing to make headway through the uploaded modules, and I'm also pleased to say that the builder keeping the <a href="">Reports</a> <a href="">sites</a> up to date has been managing the page requests very well this month, despite such a large volume of reports being submitted and continued interest in the site.</p><p>After all the news featured in the <a href="">August Summary</a>, it's not too surprising we've not had much to report for September.</p><p>Last month we had a total of 161 testers submitting reports. This month 22 addresses were mapped in total, of which 11 were for newly identified testers.</p><p>As I've mentioned previously, if you're planning to present a testing-related talk at a forthcoming workshop or technical event, please let me know and I'll get it posted on here too.</p><p>That's all for this month's summary, so until the next post, happy testing<nobr> <wbr></nobr>:)</p> barbie 2009-10-05T19:10:25+00:00 journal ImageMagick / convert / --no-globbing! 
<p>I've recently reworked an image creation script at work to use <a href="">ImageMagick</a>'s own <a href=""> <code>convert</code> </a> script. The main reason is that text support is much better using <code>convert</code> than calling ImageMagick through its <a href="">Perl API</a>. However, it threw up a rather confusing issue that took a while to track down and resolve.</p><p>The command issued for a number of images is along the lines of the following:</p><blockquote><div><p> <tt>convert -background "#ffffff" -fill "#000000" -font Helvetica -pointsize 14 -size 400x caption:"*" 9780596001735.png</tt></p></div> </blockquote><p>The 'caption' is the piece of text we wish to have in the image. In most cases this is 1 or 2 short sentences, but in some cases it can include a single asterisk, as above. This has the confusing result of creating many files, where the text in each is a different filename as found in the current directory. It's perhaps not too confusing to realise that filename expansion has occurred. However, the asterisk is quoted, so shouldn't be expanded by bash. After a bit of investigation and various attempts to check quoting in the shell, I discovered that ImageMagick's <a href="">own documentation</a> has this to say regarding <code>convert</code>...</p><blockquote><div><p> <i>In Unix shells, certain characters such as the asterisk (*) and question mark (?) automagically cause lists of filenames to be generated based on pattern matches. This feature is known as globbing. ImageMagick supports filename globbing for systems, such as Windows, that does not natively support it. For example, suppose you want to convert 1.jpg, 2.jpg, 3.jpg, 4.jpg, and 5.jpg in your current directory to a GIF animation. 
You can conveniently refer to all of the JPEG files with this command: </i> </p><blockquote><div><p> <tt>$magick&gt; convert *.jpg images.gif</tt></p></div> </blockquote></div> </blockquote><p>In the above command this would actually be expanded by the shell on Unix-like systems, not <code>convert</code>, as there is no quoting around the asterisk. However, ImageMagick, in its desire to be "helpful", gleefully ignores any quoting and does the filename expansion (or globbing as they call it) regardless. Hence, in my original command, several hundred files could be generated.</p><p>Understanding this, I then set about trying to pass the character in hex form, escaping it with '\' and quoting it in a variety of ways, just to get the single asterisk written as the text, all to no avail. I hunted through the online documentation (both on ImageMagick's site and in other forums) to find a solution, but drew a blank. I had expected to find a command-line option such as '--no-globbing' or similar that would suppress the filename expansion feature, but alas no.</p><p>Through a bit of trial and error I finally discovered the solution:</p><blockquote><div><p> <tt>convert -background "#ffffff" -fill "#000000" -font Helvetica -pointsize 14 -size 400x caption:"* " 9780596001735.png</tt></p></div> </blockquote><p>Can you see the difference? All it took was a magic(k?) space character after the asterisk for <code>convert</code> to suppress the filename expansion! </p><p>Nowhere in the manual, docs and online help is there any mention of this. Perhaps I'm the first to encounter this, but I doubt it. As a way to help others who might also come across this frustration, I'm posting it here. I've submitted a bug report to the ImageMagick Wizards, so hopefully it may get considered for a future release. 
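Since the images are generated from a script, the workaround can be applied automatically before shelling out. Here is a minimal Perl sketch of that idea; the helper name and the calling code are illustrative, not taken from the original script:

```perl
use strict;
use warnings;

# ImageMagick's convert does its own filename globbing on the caption
# argument, regardless of shell quoting. Appending a space when the
# text contains glob characters (* or ?) suppresses that expansion.
sub safe_caption {
    my ($text) = @_;
    $text .= ' ' if $text =~ /[*?]/;
    return "caption:$text";
}

# Using system's list form also bypasses the shell entirely, so only
# convert's own globbing needs the trailing-space workaround.
my @cmd = (
    'convert',
    '-background', '#ffffff', '-fill', '#000000',
    '-font', 'Helvetica', '-pointsize', '14', '-size', '400x',
    safe_caption('*'),
    '9780596001735.png',
);
# system(@cmd) == 0 or die "convert failed: $?";
```

The `system` call is left commented out so the sketch stands alone without ImageMagick installed.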
However, for now this looks to be the only way to get it working as intended.</p> barbie 2009-09-30T12:19:59+00:00 journal Test::Database, for CPAN testers <p> Hopefully, a week after my <a href="/~BooK/journal/39660">Test::Database, for CPAN Authors</a> post, some CPAN authors have started to use <a href="">Test::Database</a>.<nobr> <wbr></nobr>;-) </p><p> <a href="">Test::Database</a> is fine for testing database independence on your local setup, but to really leverage the power of CPAN Testers to fully test your module over all types of setups, it needs to be installed <i>and configured</i> on a sufficient number of tester hosts. </p><p> During my <a href="">YAPC Europe 2009</a> <a href="">talk about Test::Database</a>, I invited several CPAN testers to attend. This is a chicken-and-egg situation: this is a useful module, but CPAN authors will use it only if they know it will be available on a reasonable number of testing hosts. On the other hand, few CPAN testers will bother configuring it if no one's using it anyway. </p><p> This post is basically a replay of my YAPC::Europe talk: last week I invited CPAN authors to try out <a href="">Test::Database</a>, and this week I invite CPAN testers to install <i>and configure</i> <a href="">Test::Database</a> as part of the toolchain installed on their smokeboxes. </p><p> And the documentation contains a <a href="">nice tutorial</a>, so it shouldn't be too hard. </p> BooK 2009-09-29T12:04:29+00:00 journal YAPC::Europe 2009 Survey - Results Now Online <p>At the beginning of August 2009, the 10th Annual YAPC::Europe took place. In the following weeks attendees were asked to complete surveys for the talks, tutorials and the conference as a whole. 
I'm pleased to announce that the results of the <a href="">YAPC Conference Survey</a> for the <a href="">YAPC::Europe 2009</a> event are <a href="">now available online</a>.</p><p>The additional comments and suggestions given via the feedback forms have been sent to the organisers, as well as to next year's organisers, hopefully giving them the opportunity to refine their ideas to improve the conference experience for everyone in the future.</p><p>In addition, the results of the talk and tutorial evaluation forms have also been sent out to the respective speakers. If you were a speaker in Lisbon and haven't received an email from me by the end of today, check your spam box first, then contact me if you still haven't found anything.</p><p>My thanks to the organisers for letting me run the survey for YAPC::Europe this year, and many thanks to everyone who responded to the main survey, as well as all the evaluation surveys.</p><p>If you have suggestions for improving the surveys, please let me know.</p> barbie 2009-09-28T12:22:56+00:00 journal Test::Database, for CPAN authors <p> About a year ago, I realized there was no good way to test code that claims to be database independent. Even testing code that needs <i>a</i> database is difficult: most modules either use SQLite (but then they don't test the database independence) or request some environment variables defining the DSN to be set up (but these are unlikely to exist on a CPAN tester's machine). </p><p> So I decided to write <a href="">Test::Database</a>. </p><p> It took me a while to get right, but thanks to the isolation and focus that the <a href="">Birmingham QA Hackathon</a> from March 2009 gave me (and the completely broken state I put the module into at that time), I managed to come up with a satisfying version 1.00 in July this year (the module is now at version 1.06). 
</p><p> Basically, what <tt>Test::Database</tt> does is simply this: <i>it gives you the information you need to connect to a database that matches your testing needs</i> (implicitly allowing you to do whatever you want in there, including creating and dropping tables). </p><p> <tt>Test::Database</tt> has a single interface for test authors: </p><blockquote><div><p> <tt>&nbsp; &nbsp; my @handles = Test::Database-&gt;handles( @requests );</tt></p></div> </blockquote><p> <tt>@requests</tt> is a list of "requests" for database handles. Requests must declare the DBD they expect, and can optionally add version-based limitations (only available for drivers supported by <tt>Test::Database</tt>). </p><p> <tt>Test::Database</tt> will return as many matching <tt>Test::Database::Handle</tt> objects as it can find or create, depending on the locally installed modules and the configuration of the user running the code. </p><p> You can then simply use the information from the <tt>Test::Database::Handle</tt> object: </p><blockquote><div><p> <tt>&nbsp; &nbsp; # $handle is a Test::Database::Handle object<br> <br>&nbsp; &nbsp; # get all the info you need and DIY<br>&nbsp; &nbsp; my ( $dsn, $user, $pass ) = $handle-&gt;connection_info();<br>&nbsp; &nbsp; my $dbh = DBI-&gt;connect( $dsn, $user, $pass );<br> <br>&nbsp; &nbsp; # be lazy and let it do the DBI-&gt;connect( $dsn, $user, $pass ) for you<br>&nbsp; &nbsp; my $dbh = $handle-&gt;dbh();</tt></p></div> </blockquote><p> So once you've added support for MySQL to your module (in addition to SQLite), you can simply edit the test script like so: </p><blockquote><div><p> <tt>&nbsp; &nbsp; my @handles = Test::Database-&gt;handles( 'SQLite', 'mysql' );</tt></p></div> </blockquote><p> And your test suite will pick up a MySQL handle wherever there's one available. 
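Putting those fragments together, a test script might look like the following sketch. It degrades gracefully when Test::Database (or a suitable database) is unavailable; the table name and schema are purely illustrative:

```perl
use strict;
use warnings;

# Ask Test::Database for handles, coping with the module being absent
# (e.g. on a machine where it isn't installed yet).
my @handles = eval {
    require Test::Database;
    Test::Database->handles( 'SQLite', 'mysql' );
};

if (@handles) {
    for my $handle (@handles) {
        # the lazy route: DBI->connect is done for us
        my $dbh = $handle->dbh();

        # ... create tables and exercise your module against $dbh ...
        # (table 't' is an illustrative stand-in for a real schema)
        $dbh->do('CREATE TABLE t (id INTEGER)');
        $dbh->do('DROP TABLE t');
    }
}
else {
    print "no suitable database available, skipping\n";
}
```

Each configured driver contributes its own handle, so the same loop exercises SQLite and MySQL (and anything else the tester has set up) without further changes.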
</p><p> <small> Many thanks to <a href="">SUKRIA</a>, who'll be the first CPAN author to really use Test::Database to test a CPAN module (namely, <a href="">Coat::Persistent</a>, when the latest release finally hits CPAN). </small> </p> BooK 2009-09-22T01:40:22+00:00 journal