Leader of Birmingham.pm [pm.org] and a CPAN author [cpan.org]. Co-organised YAPC::Europe in 2006 and the 2009 QA Hackathon, responsible for the YAPC Conference Surveys [yapc-surveys.org] and the QA Hackathon [qa-hackathon.org] websites. Also the current caretaker for the CPAN Testers websites and data stores.
If you really want to find out more, buy me a Guinness.
After 6 months of development work, following 2 years worth of design and preparation, CPAN Testers 2.0 is finally live.
With the rapid growth in CPAN Testers environments and testers over the past few years, the previous method of posting reports to a mailing list had reached the point where it could no longer scale. This was recognised several years ago, and discussions for a new system had already begun, with the view that reports should be submitted via HTTP.
At the Oslo QA Hackathon in 2008, David Golden and Ricardo Signes devised the Metabase, with the design work continuing at the Birmingham QA Hackathon in 2009, where David and Ricardo were able to bring others into the thought process to work through potential issues and begin initial coding. A number of releases to CPAN and Github followed, with more people taking an interest in the project.
The Metabase itself is a database framework and web API to store and search opinions from anyone about anything. In the terminology of Metabase, Users store Facts about Resources. In the Metabase world, each CPAN tester is a User. The Resource is a CPAN distribution. The Fact is the test report. Today that’s just the text of the email message, but in the future it will be structured data. The Metabase specifies data storage capabilities, but the actual database storage is pluggable, from flat files to relational databases to cloud services, which gives CPAN Testers more flexibility to evolve or scale over time.
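The Users/Facts/Resources model can be pictured with a minimal sketch. All the field names below are my own illustration of the concept, not the actual Metabase API or schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4

@dataclass
class Fact:
    """One opinion about one resource -- for CPAN Testers, a test
    report about a CPAN distribution. Field names are illustrative
    only, not the real Metabase schema."""
    user: str        # the tester submitting the report
    resource: str    # e.g. "AUTHOR/Some-Dist-1.23.tar.gz"
    content: str     # today: the raw text of the report
    guid: str = field(default_factory=lambda: str(uuid4()))
    created: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```

Because each fact carries its own GUID, the storage behind it can be swapped (flat files, a relational database, a cloud service) without changing how facts are identified.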
Meanwhile the CPAN Testers community was also attracting more and more interest from people wanting to be testers themselves. As a consequence the volume of reports submitted increased each month, to the point that the perl.org mail server was struggling to deal with all the mailing lists it hosted. The cpan-testers mailing list was receiving more posts in one day than any other list received in a month (or even a year, in some cases). Robert and Ask, very reasonably, asked if the testers could throttle their submissions down to 5k report posts a day, and set a deadline of 1st March 2010 to switch off the mailing list.
David Golden quickly took on the task of envisaging a project plan, and work began in earnest in December 2009. With less than 3 months to the cut-off date, there was a lot of work to do. David concentrated on the Metabase, with Barbie working on ensuring that the current cpanstats database and related websites could move to the Metabase style of reports. Despite a lot of hard work from a lot of people, we unfortunately missed the 1st March deadline. However, with report submissions throttled to a more manageable level and the target for HTTP submissions in sight, though not yet complete, Robert and Ask were very understanding and agreed to keep us going a little while longer.
Throughout March and April a small group of beta testers were asked to fire their submissions at the new system. This ironed out many wrinkles and resulted in a better understanding of what we wanted to achieve. The first attempts at retrieving reports from the Metabase into the cpanstats database began in April, and again highlighted further wrinkles that needed to be addressed. After a month of hard testing and refinement, we finally had working code that took a report from submission by a tester, through storage in the Metabase and retrieval into the cpanstats database, to presentation on the CPAN Testers family of websites.
During June the process was silently switched from testing to live, allowing reports to be fed through into the live websites. Thanks to the ease with which the new-style reporting fitted into the existing system, the switch largely went unnoticed by the CPAN Testers community as well as the wider Perl community. A considerable success.
The CPAN Testers eco-system is now considerably larger than those early days of simply submitting handwritten reports by email to a mailing list, and the work to get here has featured a cast of thousands. Specifically for CPAN Testers 2.0, the following people have contributed code, ideas and effort to the project over the past six months:
Barbie and David would like to thank everyone for their involvement. Without these guys CPAN Testers 2.0 would not have been possible. Thanks to everyone, we can now look forward to another 10 years and more of CPAN Testers.
CPAN Testers now holds over 7.5 million test reports covering nearly 11 years worth of testing Perl distributions. There have been over 1,000 testers in that time, and every single one has helped the CPAN Testers project to be the largest single community supported testing system of any programming language. For a full list of everyone who has contributed, visit the CPAN Testers Leaderboard. A huge thank you to everyone.
With the Metabase now online and live, we can now announce an absolute deadline to close the mailing list. This is currently set as 31st August 2010. After this date all submissions via email will be rejected, and testers will be encouraged to upgrade their testing tools to take advantage of the new HTTP submission system. Many of the high volume testers have already moved to the new system, and we expect nearly everyone else to move in the next month. We will be tailing the SMTP submissions to catch those who haven't switched, such as some of the more infrequent testers, and warn them of the deadline.
More work is planned for CPAN Testers, from further validation and administration of reports, to providing more functionality for alternative analysis and search capabilities. Please check the CPAN Testers Blog for our regular updates.
If you'd like to become a CPAN Tester, please check the CPAN Testers Wiki for details about setting up a smoke testing environment, and join the cpan-testers-discuss mailing list where many of the key members of the project can offer help and advice.
You can find out more about CPAN Testers at two forthcoming conferences. David Golden will be presenting "Free QA! What FOSS can Learn from CPAN Testers" at OSCON and Barbie will be presenting "CPAN Testers 2.0 : I love it when a plan comes together" at YAPC::Europe.
CPAN Testers is sponsored by Birmingham Perl Mongers, and supported by the Perl community.
You can now download the full and complete Press Release from the CPAN Testers Blog. If you have access to further IT news reporting services, please feel free to submit the Press Release to them. Please let us know if you are successful in getting it published.
Cross-posted from the CPAN Testers Blog.
Event: Birmingham.pm Technical Meeting
Date: Wednesday 26th May 2010
Times: from 7pm onwards (see below)
Venue: The Victoria, 48 John Bright Street, Birmingham, B1 1BN.
This month we welcome a returning guest speaker, Richard Wallman, who will be taking a look at how Catalyst has eased the development lifecycle of websites, drawing from his own experiences. In addition I'll be looking at the progress of CPAN Testers 2.0, and at some of the near-future plans for CPAN Testers.
As per usual, this month's technical meeting will be upstairs at The Victoria. The pub is on the corner of John Bright Street and Beak Street, between the old entrance to the Alexandra Theatre and the backstage entrance. If in doubt, the main entrance to the Theatre is on the inner ring road, near the Pagoda roundabout. The pub is on the road immediately behind the main entrance. See the map link on the website if you're stuck.
As always entry is free, with no knowledge of Perl required. We'd be delighted to have you along, so feel free to invite family, friends and colleagues.
Some of us should be at the venue from about 7.00pm, usually in the backroom downstairs. Order food as you get there, and we'll aim to begin talks at about 8pm. I expect talks to finish by 9.30pm, with plenty of time for discussion in the bar downstairs.
Venue & Directions:
The venue is approximately 5-10 minutes walk from New Street station, and about the same from the city centre. On-street car parking is available; see full details and directions on the website.
These are the rough times for the evening:
Please note that beer will be consumed during all the above sessions.
Cross-posted from the CPAN Testers Blog.
Last month CPAN Testers was finally given a deadline to complete the move away from SMTP to HTTP submissions for reports. Or perhaps more accurately, to move away from the perl.org servers, as the volume of report submissions has been affecting support of other services in the Perl eco-system. The deadline is 1st March 2010, which leaves just under 2 months for us to move to the CPAN Testers 2.0 infrastructure. Not very long.
David Golden has now put together a plan of action, which is being rapidly consumed and worked on. The first fruit of this has been an update to the CPAN Testers Reports site. The ID previously visible on the site, referring to a specific report, is now being hidden away. The reason for this is that the current ID refers to the NNTP ID used on the perl.org NNTP archive for the cpan-testers mailing list. This ID is specific to the SMTP submissions and covers many posts which are not valid reports. As such we will be moving to a GUID as supplied by the Metabase framework, with existing valid SMTP-submitted reports being imported into the Metabase. The NNTP ID will eventually be completely replaced by the Metabase GUID across all parts of the CPAN Testers eco-system, including all the databases and websites, so you will start to see a transition over the next few weeks.
The second change, which has now been implemented, is to present the reports via the CPAN Testers Reports site rather than the NNTP archive on the perl.org servers. Currently the presentation of a report (e.g. this report for App-Maisha) is accessed via the reports pages for a distribution or an author, but reports will also be accessible in a similar manner across all the CPAN Testers websites. There is a large batch of early reports currently missing from the database, but these are being updated now, and will hopefully be complete within the next few days. If you have any issues with the way the reports are presented, including any broken or missing links from other parts of the site, please let me know.
In all this change, there is one aspect that may worry a few people, and that is the "Find A Tester" application. For the next few months it will still exist, but the plan is to make the Reports site better able to provide tester contact information. In addition, the testers themselves will soon have the ability to update their own profiles. Initially this will be used to link email addresses to reports and then map those email addresses to a profile held within the Metabase, but in the longer term it will be used to help us manage report submissions better.
David Golden is concentrating on the Client and Metabase parts of the action plan, and I am working on porting the websites and 'cpanstats' database. If you have any free time and would like to help out, please review the action plan, join the cpan-testers-discuss mailing list, and please let us know where you'd like to help. There is a lot of work to be done and the more people involved, the better the spread of knowledge in the longer term.
After David announced the deadline last month, all the testers throttled back their smoke bots. This saw a dramatic reduction in the number of reports and pages being processed, and enabled the Reports Page Builder to catch up with itself, to the point where it frequently had fewer than 1,000 requests waiting. That changed yesterday with the changes to the website, as every page now needs to be updated. It typically takes about 5 days to build the complete site, so this quiet period will allow the Builder to rebuild the site without adversely affecting the current level of report submissions. Expect the site to reach a more manageable level of processing some time next week. To help monitor the progress of the builder, a new part of the Reports site, The Status Page, now checks the status of all outstanding requests every 15 minutes, providing both a 24-hour and a week-long perspective.
A new addition to the family was also launched recently, the CPAN Testers Analysis site, which Andreas König has been working on, to help authors identify failure trends from reports for their distributions. Read more on Andreas' blog.
Last month we had a total of 168 tester addresses submitting reports. The mappings this month included 22 total addresses mapped, of which 2 were for newly identified testers. Another low mapping month, due to work being done on CPAN Testers as a whole.
My thanks this month go to David Golden for finding the time to write an action plan (and to his wife for allowing him the time to write it), as well as for working on all the other areas involving CPAN Testers and the Metabase.
Cross-posted from the CPAN Testers Blog.
In November we reached the 6 million reports submitted mark. It's quite staggering how many reports are being submitted these days; it's now roughly 1 million reports every 3 months! So expect a 10 million reports post some time in August 2010.
Now that we are producing so many reports, while there is still a desire to get more reports from less-tested operating systems, Tim Bunce recently highlighted his interest in getting reports that cover a diverse set of Perl configuration flags, in particular regarding how Perl was compiled (with and without threads, etc). At the moment the CPAN Testers Statistics database doesn't include that information, but the Metabase behind CPAN Testers 2.0 will. In addition, the Metabase will be able to be queried to glean the reports that contain a specific set of flags. At the moment quite a few different setups are testing on the top few operating systems. While some authors see these as just repeated results, in some cases they produce slight differences in the test results, which is precisely what Tim was interested in for Devel-NYTProf. Hopefully we'll be closer to having more of that information readily available soon. In the meantime, if you do want to get involved with CPAN Testers and only have a commonly tested operating system available, take a look at some of the reports posted by current testers for the same platform, and see what different setups you could provide.
In the CPAN Testers namespace, CPAN has seen a new upload, CPAN-Testers-Data-Addresses. This release will be the new way for me to manage the tester address mappings. To begin with the testing is being run stand-alone, but it will shortly be integrated into the CPAN Testers Statistics website. From there it will also be integrated into the new site that is hopefully being launched early next year, which will allow testers to register their testing addresses (among other things). More uploads to the CPAN Testers namespace are being worked on, in particular ones to provide more programmatic access to the CPAN Testers APIs. More news on those hopefully next month.
This weekend sees the annual London Perl Workshop. Featured in the schedule is Chris 'BinGOs' Williams' talk "Rough Guide to CPAN Testing". If you are a CPAN Tester and are planning to attend the event, please come along and say hello.
Last month we had a total of 164 tester addresses submitting reports. The mappings this month included 17 total addresses mapped, of which 7 were for newly identified testers. A bit of a low mapping month, mostly due to my attention being elsewhere. With the new mapping system hopefully this will become a little more streamlined for next year.
Until next time, happy Christmas testing.
Cross-posted from the CPAN Testers Blog.
Back in 2008, it was obvious that the fragmentation of CPAN Testers sites was a problem. The system was slow, usually getting updated just once a day, and the presentation was a little disjointed. At that point a dedicated server was suggested, as this would bring a number of the key sites together and potentially provide a base from which to improve the updating of the sites. In addition it was seen as a first step towards CPAN Testers 2.0.
In late September 2008 a proposal was put forward to the members of Birmingham Perl Mongers to donate funds towards acquiring a dedicated server to host a range of sites and databases. They unanimously approved the proposal and a server was paid for at Hetzner Online AG. Based in Germany on a high-bandwidth line, the server has enabled CPAN Testers to grow, and now supports a dynamic set of sites and databases that are a consistent benefit to authors and users.
The server was covered for 1 year, with the intention of looking for a corporate sponsor to continue the funding for further years. However, due to the recent economic climate, the opportunities for funding appear to be limited. As such, recently another proposal was put to the members of Birmingham Perl Mongers, and once again they unanimously approved it. The server and hosting is now paid up for another year, and plans are afoot to further increase the family of sites and provide more resources to authors, testers and users.
Many thanks to all of Birmingham Perl Mongers for their continued support of the CPAN Testers project.
Cross-posted from the CPAN Testers Blog.
If you've not been following CPAN Testers in the last month, you will likely have missed the updates to the CPAN Testers Statistics site. I would like to thank MW487, JJ and Colin Newell for their thoughts and suggestions. The biggest changes have been around the matrices. The old matrices have been thrown away and a completely new set created, merging much of the data that was previously spread across the two old-style matrices. The site now also looks at the OS itself, rather than the specific version installed, which gives a better general overview. In turn a new OS table is also available, highlighting how many tests per month are attributed to a particular OS. Unsurprisingly, Linux is currently streets ahead of any other OS.
The graphs have always been of interest to those wishing to use them to promote Perl and CPAN; however, the way they are currently presented doesn't always suit everyone, especially if they wish to change the style or take a different snapshot of the data. As such, you can now download the raw data files used to generate the graphs. All the files are in CSV format, so are easily loaded into your spreadsheet application of choice. Speaking of spreadsheets, in addition to changing the look of the matrices, you can now also download an XLS version of each matrix, as well as view each table in a widescreen format.
A new graph available is the Performance Graph, which shows how the CPAN Testers Reports Page Builder is performing each day, against the volume of reports submitted per day. While the Builder performs well the majority of the time, every so often it slows due to the load on the web server, meaning it occasionally has to catch up, which can take several days. Now you can see whether any issues have caused your page to take a little longer to build, as well as get a better idea of how many reports are being submitted every day.
The most recent update has been the new dashboard on the homepage. Every so often I get asked how many distributions are on CPAN. Although the CPAN Statistics have had their own page for a while now, some have mentioned that it would be really cool to have a ticker that flips as a new upload gets added to CPAN. Although I can't do that in true realtime just yet, the new dashboard does try to emulate the rate at which reports and uploads have been submitted over the previous 24 hours.
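The emulation boils down to simple arithmetic: divide the day by the number of events seen, and tick the counter at that average interval. A hypothetical sketch of the idea (not the dashboard's actual code):

```python
def ticker_interval_seconds(events_last_24h):
    """Average number of seconds between events over the last 24
    hours, so a counter can tick at roughly the observed rate
    rather than in true realtime."""
    if events_last_24h <= 0:
        return None  # nothing to animate
    return 24 * 60 * 60 / events_last_24h
```

So a day that saw 8,640 report submissions would tick the counter once every 10 seconds.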
In other CPAN related news the proposals and discussions for Meta-Spec 2.0 have now come to a close. David Golden is currently accepting patches to the approved proposals and hopefully we'll have a new draft specification available soon. It's been an interesting discussion in some cases, while others have been agreed or rejected almost without question. Some require a bit more thought, so it's likely there will be a further refinement of the spec in the future. If you want to read all the threads, visit the mailing list archives.
Last month we had a total of 171 tester addresses submitting reports. The mappings this month included 27 total addresses mapped, of which 14 were for newly identified testers.
Until next time, happy testing.
Now that I've got the YAPC::NA and YAPC::Europe surveys done and dusted, I can start looking at some of the changes I want to make to the survey system. Some I have already been planning, while others have been suggested by people who have taken the surveys.
Paint By Number Heart
The YAML file used to drive the surveys was originally designed not to reference any question numbers, with the code creating these on the fly to ensure no duplicates. To begin with this worked fine, but there is now such a variety of questions, including some that relate back to others, that knowing the question number ahead of time is useful. This is a fairly minor change, so shouldn't be too tricky to implement.
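The on-the-fly numbering can be sketched as follows; this is a simplified illustration of the idea, not the survey system's actual code. Questions carry no numbers in the YAML, and unique sequential numbers are assigned in document order when the survey is built:

```python
def number_questions(sections):
    """Walk a YAML-style list of sections and assign each question a
    sequential number, guaranteeing no duplicates."""
    numbered = {}
    n = 0
    for section in sections:
        for question in section["questions"]:
            n += 1
            numbered[n] = question
    return numbered

# A toy survey structure for illustration:
survey = [
    {"name": "General", "questions": ["How did you travel?",
                                      "Where did you stay?"]},
    {"name": "Talks",   "questions": ["Which talks did you attend?"]},
]
```

Fixing the numbers ahead of time simply means recording the result of a pass like this back into the YAML file, so templates can then refer to a question by a stable number.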
Take It To The Limit
I've had a few comments from people regarding the number of times they have attended a particular conference or workshop. Seeing as some events have now been running for many years, people who have been to all of them but can't remember how many that is have sometimes entered big numbers to imply as much. As such, in future I'm going to add the ability to write 'ALL', as well as include the maximum number of occurrences in the question label. Then if anyone does enter a rather large number, or 'ALL', it'll be reset to the correct maximum.
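The planned handling might look something like this hypothetical helper (my own sketch, not the survey code):

```python
def normalise_attendance(raw, max_count):
    """Accept either 'ALL' or a number in answer to a "how many
    times have you attended?" question, clamping anything over the
    known maximum for the event back down to that maximum."""
    if str(raw).strip().upper() == "ALL":
        return max_count
    try:
        n = int(raw)
    except (TypeError, ValueError):
        return None  # unparseable answer
    return min(max(n, 0), max_count)
```

With this in place, someone who enters '99' for a workshop that has only run 5 times simply gets recorded as having attended all 5.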
Who Can It Be Now?
With the talk evaluations, a few speakers have asked whether I can tell them who wrote what in their evaluation results. Due to the nature of submitting everything anonymously, I currently don't include any contact details with the responses. Having said that, there is no reason why anyone submitting an evaluation can't give their consent to being identified, so that speakers can respond to the individual if they wish. The speakers concerned have had genuine reasons for wishing to contact individuals, so from next year there will be an extra tick box if you would like to be identified to the speaker. The default will still be anonymous submission, so it will be entirely the individual's choice whether they identify themselves or not.
Just Like You Imagined
I started writing a specification for the Survey YAML file during the Birmingham YAPC::Europe in 2006, but haven't kept it up to date with the changes that have been made in more recent times. I plan to complete this so that anyone reading the YAML files can make sense of them. It also might be useful for others to suggest new features.
I have the raw data from all the surveys since 2006 in SQL form. However, I want to make it more freely available and accessible, so others can analyse the data in different ways. Together with the raw data itself, this then needs the YAML file, the Survey specification and an example translation program to help understand how the data maps to the questions and what questions relate to each other. Essentially this just means me cleaning up the program I use to prepare the results and documenting everything.
Speak My Language
At the moment the survey site is written and presented in English. However, it's long overdue for the questions and text to be available in a variety of languages. So although I don't speak other languages (at least not well enough to be competent in them), I'm hoping others will be willing to help out with some translations. Before I get to that stage though, I need to add support for detecting and switching between languages.
Part of the reason for doing this is to allow YAPCs (and workshops) in other countries to take advantage of the surveys, if they wish to. At the moment I've only been working with the YAPC::Europe and YAPC::NA teams, as they have been the conferences I've attended, and are predominantly English language based. But these days I don't need to be there to run the surveys, and the Perl community pretty much covers the whole world, so why not accumulate knowledge from other events. At the very least, regular organising teams for workshops should then have the opportunity to get feedback to improve their events too.
Time Stand Still
It doesn't, but I often wish it would! A couple of people have commented that it takes such a long time between the end of the conference and the close of the surveys. This year I reduced it to 6 weeks, but it can then take an additional couple of weeks to process all the results, create graphs, prepare the website pages, write the documents for organisers and generate the emails for all the speakers. Much of it is automated now, but there are several tests and tweaks that need to happen to get it looking right. This will be improved further with the question numbering change, as I can fine tune the templates to particular questions.
As for improving the response rate, it often takes people some time to collect their thoughts. Typically the bulk of responses are received within the first 2 weeks after the conference. The last week sees another big increase in responses, when I send out the '1 week left' reminder. The weeks in between still see a steady trickle of responses, but it is slow. By way of example, below are the numbers of responses received each day for the YAPC::Europe survey. The last 2 columns indicate the number of survey responses and the number of talk evaluation responses respectively:
Mon <-- evaluations opened
Tue, 4th August 2009 - 27
Wed, 5th August 2009 36 234 <-- last day of conference
Thu, 6th August 2009 34 119
Fri, 7th August 2009 31 101 <-- end of tutorials
Sat, 8th August 2009 6 55
Sun, 9th August 2009 14 47
Mon, 10th August 2009 17 28
Tue, 11th August 2009 9 58
Wed, 12th August 2009 4 34
Thu, 13th August 2009 3 13
Fri, 14th August 2009 3 18 <-- end of week 1
Sat, 15th August 2009 1 3
Sun, 16th August 2009 2 11
Mon, 17th August 2009 4 2
Tue, 18th August 2009 2 -
Wed, 19th August 2009 2 33
Thu, 20th August 2009 2 -
Fri <-- end of week 2
Mon, 24th August 2009 1 7
Tue, 25th August 2009 1 -
Fri <-- end of week 3
Sun, 30th August 2009 1 6
Mon, 31st August 2009 1 -
Wed, 2nd September 2009 1 -
Thu, 3rd September 2009 - 13
Fri <-- end of week 4
Thu, 10th September 2009 1 -
Fri, 11th September 2009 1 - <-- end of week 5
Tue, 15th September 2009 23 48 <-- reminder sent out
Wed, 16th September 2009 2 1
Thu, 17th September 2009 3
Fri, 18th September 2009 2 46 <-- end of week 6
Sat, 19th September 2009 2 8
Totals: 209 912
There is indeed a lull in the middle 3 weeks, so next year I do plan to try to reduce the availability of the surveys to at most 4 weeks. With the improvements to the evaluation and preparation code, it should then take about a week at most to review the data and then publish the results.
However, although my aim is to improve the turnaround of conference to results, there are a few other factors involved. All of my time taken to administer the surveys (before, during and after each conference) needs to accommodate my employer and family too. The conferences are typically around holiday season, so juggling everything to get the results done in a timely manner can be fun! Please bear this in mind if I don't fit into your personal schedule.
Release The Bats
It's always been my intention to release the code that runs the survey system as Open Source; however, it means releasing two projects as Open Source, not one. The survey code is built on top of Labyrinth, a web management system I started developing in 2002 (2001 if you include the prototype), which is now used to run several CPAN Testers websites, as well as Birmingham.pm, GlousLUG and many other sites I run. Having a development team of one has meant it has taken a while for the code base to reach a stable state, which it has been for a couple of years now. It's not Catalyst or Jifty, although the core could potentially be abstracted and released as such. However, that's not what I'm going to release. Essentially I'll be releasing Labyrinth in two parts: the Perl library code and the supporting data files. On top of that I can then release the YAPC Survey code with the appropriate templates, plugins and additional data files.
Whether anyone then collaborates or not is another matter, but at least they'll have the chance to. People running smaller events will also have the ability to download, install and run the code themselves to administer their own surveys.
Pretending To See The Future
If you're an event organiser and think that the YAPC Survey system would be something you could use, please get in touch with any ideas you'd like to see featured. Also if you took any of the surveys this year, and think that they could be improved, whether it's simply rewording questions, adding questions, or some functionality that could be improved, please let me know.
I am very grateful for the support of all the conference organisers, for Eric Cholet writing a new ACT API for me, and to the several speakers who have sent me personal thanks for supplying them with feedback on their talks; it's obviously something that is appreciated by the majority (though sadly not all). The split between the main survey and the talk/tutorial surveys seems to have worked well, and the majority of improvements so far have gone down well. There is still room for improvement, and hopefully the changes above will make for more feedback next year.
Cross-posted from the CPAN Testers Blog.
Last month was a fairly quiet month in terms of development, as projects such as the call for CPAN Meta Spec change proposals were opened on the QA Wiki, and the NA and Europe YAPC Conference Surveys were unveiled. However, some statistics should see the light of day soon, thanks to Tim Bunce putting together an updated Perl Myths talk for OSSBarCamp in Dublin last month. Expect to see them on the CPAN Statistics site some time during the month.
The CPAN Testers have been continuing to make headway through the uploaded modules, and I'm also pleased to say that the builder keeping the Reports sites up to date has been managing the page requests very well this month, despite such a large volume of reports being submitted and continued interest in the site.
After all the news featured in the August Summary, it's not too surprising we've not had too much to report for September.
Last month we had a total of 161 testers submitting reports. The mappings this month included 22 total addresses mapped, of which 11 were for newly identified testers.
As I've mentioned previously, if you're planning to present a testing related talk at a forthcoming workshop or technical event, please let me know and I'll get it posted on here too.
That's all for this month's summary, so until the next post, happy testing.
I've recently reworked an image creation script at work to use ImageMagick's own convert script. The main reason is that text support is much better using convert than calling ImageMagick through its Perl API. However, it threw up a rather confusing issue that took a while to track down and resolve.
The command issued for a number of images is along the lines of the following:
convert -background "#ffffff" -fill "#000000" -font Helvetica -pointsize 14 -size 400x caption:"*" 9780596001735.png
The 'caption' is the piece of text we wish to have in the image. In most cases this is one or two short sentences, but in some cases it can include a single asterisk, as above. This had the confusing result of creating many image files, the text in each being a different filename found in the current directory. It's perhaps not too confusing to realise that filename expansion has occurred. However, the asterisk is quoted, so it shouldn't be expanded by bash. After a bit of investigation and various attempts to check quoting in the shell, I discovered that ImageMagick's own documentation has this to say regarding globbing:
In Unix shells, certain characters such as the asterisk (*) and question mark (?) automagically cause lists of filenames to be generated based on pattern matches. This feature is known as globbing. ImageMagick supports filename globbing for systems, such as Windows, that does not natively support it. For example, suppose you want to convert 1.jpg, 2.jpg, 3.jpg, 4.jpg, and 5.jpg in your current directory to a GIF animation. You can conveniently refer to all of the JPEG files with this command:
$magick> convert *.jpg images.gif
In the above command the asterisk would actually be expanded by the shell on Unix-like systems, not by convert, as there is no quoting around it. However, ImageMagick, in its desire to be "helpful", gleefully ignores any quoting and does the filename expansion (or globbing, as they call it) regardless. Hence why, in my original command, several hundred files could be generated.
Understanding this, I then set about trying to pass the character in hex form, escaping with '\' and quoting in a variety of ways to just get the single asterisk written as the text, all to no avail. I hunted through the online documentation (both on ImageMagick's site and in other forums) to find a solution, but drew a blank. I had expected to find a command-line option such as '--no-globbing' or similar, that would suppress the filename expansion feature, but alas no.
Through a bit of trial and error I finally discovered the solution:
convert -background "#ffffff" -fill "#000000" -font Helvetica -pointsize 14 -size 400x caption:"* " 9780596001735.png
Can you see the difference? All it took was a magic(k?) space character after the asterisk for convert to suppress the filename expansion!
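In a script, the workaround can be applied automatically. The sketch below is my own wrapper, not part of ImageMagick: it appends the magic space whenever the caption contains glob metacharacters, and builds the command as an argv list so the shell never gets a chance to expand anything either:

```python
import subprocess

GLOB_CHARS = set("*?[]")

def make_caption_image(text, outfile, run=False):
    # Trailing-space workaround: observed to stop convert expanding
    # glob metacharacters in the caption into filenames. This is
    # undocumented behaviour, so it may change in future releases.
    if GLOB_CHARS & set(text):
        text += " "
    argv = ["convert", "-background", "#ffffff", "-fill", "#000000",
            "-font", "Helvetica", "-pointsize", "14", "-size", "400x",
            "caption:" + text, outfile]
    if run:
        subprocess.run(argv, check=True)
    return argv
```

Returning the argv list also makes the command easy to inspect or log before anything is actually executed.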
Nowhere in the manual, docs and online help is there any mention of this. Perhaps I'm the first to encounter this, but I doubt it. As a way to help others who might also come across this frustration, I'm posting it here. I've submitted a bug report to the ImageMagick Wizards, so hopefully it may get considered for a future release. However, for now this looks to be the only way to get it working as intended.