Journal of geoffrey (5895)

Wednesday November 11, 2009
12:53 AM

Parrot Plumage: A Month Goes By

It's been over a month since my last entry. Life pretty much had me swamped for the last five weeks, but nevertheless I did get some Plumage work done.

It would be a daunting task to try to go back and remember all the details of the last five weeks, so I've decided to take a hint from pmichaud and just include my #parrotsketch reports (edited a bit here):

* Handle fetching over old working dir, including when changing repo types (as partcl did)
* Add rx() function for compiling regex strings, until nqp-rx is ready for use
* Likewise all_matches()
* Various small cleanups
* Fix gitoriousparser plugin for dalek-plugins
* darbelo++                 # help with cleanups
* dukeleto++                # plumage source tree reorg
* dukeleto++ and Tene++     # plumage test harness and test suite
* dukeleto++ and Infinoid++ # gitoriousparser work enabling a parrot-plumage dalek feed

* Dependency handling, including remembering installed projects (not the final paradise, but Good Enough For Now)
* Automatically sudo if parrot_bin directory not writable by user
* New 'projects' (list known projects) and 'showdeps' (show resolution for all dependencies) commands
* Plumage metadata dir can now be overridden by conf file (for testing support)
* Several new functions in Glue.pir and Util.nqp
* Lots more docs for Glue.pir and Util.nqp
* More tasks broken out of my head into TASKS
* dukeleto++ # Testing, testing, testing; factor out Util.nqp from main program
* darbelo++  # Matrixy metadata (and making it buildable against installed Parrot)

* Improve validation of metadata
* Refactoring and function documentation
* Much improved Makefile (with automatic Makefile rebuilding)
* import_proto.p6 (Import proto-managed projects into Plumage metadata)
* Analyzing discussion surrounding major CPAN META spec upgrade (which is in design phase)
* darbelo++ # Plumage's NQP configure brought to other projects
* Austin++  # Makefile education
* import_proto.p6 blocking on proto's installed-modules branch

* Talked at length with Plobsing++ re: current NCI problems
* Brain dumped to
* Converting Plumage to make use of new NQP-rx features
* Pushing the envelope of what NQP-rx has
* Exchanging feature requests with pmichaud++ via
* Moving Glue.pir functionality to Util.nqp where possible
* Further expanding Util.nqp to cover more common functionality
* Cleaning up and expanding Plumage's test suite
* More of everything in WIP section
* Several local Plumage branches blocked waiting for various NQP-rx features

So there you have it -- a month of Plumage work, in shorthand. I also finally got around to setting up Perl-specific microblogging accounts on Twitter and a second service; I'm japhb on both of them, just as I am in #parrot. As always, don't hesitate to drop by and ping me. If you'd like to join the Plumage effort, check out the code in the Parrot Plumage repository; read the README for the general overview, then come to #parrot to get your questions answered!

Monday October 05, 2009
03:25 AM

Parrot Plumage "Day" 8: Getting a bit smarter

When last we left off, Plumage had just managed to install its first project, thus moving from the "infant" to "toddler" phase of development.

In preparation for reaching "preschooler" status, I spent time this week removing some hackishness and making Plumage generally smarter about basic operations. Some of the changes were very simple, such as always printing out system commands to be spawned before actually executing them, or pushing some boilerplate code down into utility functions rather than forcing every calling site to go through the same contortions.

Some changes were infrastructural, including basic utilities like map(), exists(), and does() and workarounds for NQP limitations. The latter category included new functions such as as_array(), call_flattened(), and try(), which Tene++ and I added to Glue.pir to make calling convention mismatches a bit less painful.

Other changes were larger, such as adding command line option handling, allowing the user to ignore failures in various build stages, and handling exceptions thrown by Parrot ops and low-level Parrot libraries. Handling (possibly several) config files (including merging the config information across them) and moving all project builds to a separate location -- defaulting to ~/.parrot/plumage/build/ -- also fell into the "larger changes" pile.

Still others amounted to cleanup and general code maintenance, such as compiling Glue.pir to Glue.pbc during the plumage build so that all libraries could be directly loaded as pure bytecode at runtime, finding less hackish ways of working around NQP, and cleaning up documentation and comments throughout.

Finally, darbelo++ added a new project (partcl) to the metadata collection; I hope to add a few more in the next week or two.

All in all, no major new milestones reached -- but the code feels a bit more solid now, behavior is "more correct" in several places, and the user experience is definitely better. Not a bad week's work.

As always, you can check out the code at the Parrot Plumage repository, and don't hesitate to ping me on IRC -- I'm japhb on #parrot. If you'd like to join the effort, read the README for the general overview, then come to #parrot to get your questions answered!

Tuesday September 29, 2009
11:25 PM

Parrot Plumage Day 7: First Working Install

After posting for Day 6, Tene and I realized that the instructions section of the metadata spec needed to change -- the original "design" didn't survive actual implementation. While I made the necessary spec changes, Tene++ worked on several iterations of an HTTP fetcher in pure PIR.

Today we made heady progress. Two days ago, we only had basic fetching half-working. Today Plumage was able to coax its first project all the way from fetch through configure, build, test, and install!

darbelo++ wins the project prize as his decnum-dynpmcs project (support for decimal arithmetic in Parrot) was the first successful Plumage install. Close and Blizkost didn't quite make it -- Close has failing tests (which Plumage currently refuses to ignore), and Blizkost wouldn't build on my machine, though it's not clear if this was a problem with Plumage or with Blizkost itself.

There's definitely still a huge amount of work left to be done, but today marks a major milestone nonetheless. Plumage is doing a very important piece of the job it was designed for, and I'm feeling pretty proud of the progress so far.

As always, you can check out the code at the Parrot Plumage repository, and don't hesitate to ping me on IRC -- I'm japhb on #parrot. If you'd like to join the effort, read the README for the general overview, then come to #parrot to get your questions answered!

Sunday September 27, 2009
08:04 PM

Parrot Plumage Day 6: New committers!

"Day" 6 turned out to actually be hours here and there from most of the week. After Tene's initial Makefile patch initiated Day 5, Tene and darbelo began to pester me for details on the design and current capabilities. Once he realized it wasn't as far along as he needed, Tene switched his pestering to tasks that I could break out and hand off to parallelize the work a bit.

While I was thinking about that (and answering a blizzard of other questions from the #parrot crowd), I realized I needed to make the source tree more friendly for other committers. I added a CONTRIBUTING section to the README, moved the old probe tests (TimToady++ for that name) to their own directory, and made a couple other minor changes.

With that, I added Tene++ and darbelo++ as the first new committers to the parrot-plumage repository. W00t!

It didn't take long before darbelo noticed a licensing issue. After some discussion, the members of #parrot decided he was probably correct. I was assigning the copyright for Parrot Plumage to the Parrot Foundation, but not requiring new committers to have signed Contributor Licensing Agreements ("CLAs") sent in to PaFo. It happened that Tene and darbelo were both Parrot committers, which meant they already had signed CLAs -- so I simply changed the README to reflect the CLA requirement, and we neatly escaped the problem. (Note that for now we don't require Parrot Plumage committers to be full Parrot committers, merely to have a signed CLA sent in.)

Once I finally got some mental space, I went back to thinking about the tasks that I could hand off to other developers, and codified the first few into the TASKS file. darbelo++ and then Tene++ began picking off tasks and completing them over the next couple days, while I refocused on my day job.

Tene's work culminated in adding the fetch command, with the ability to handle both Subversion and Git repositories. This worked like a charm with our sample Git-based project, Blizkost. Unfortunately, we discovered our first hiccup when our sample Subversion-based project, Close, would only partially fetch -- it appears to have submodules that require special authentication. Ah well, no rest for the wicked.

With that problem added to TASKS, and the previous commits reviewed and lightly edited, "Day" 6 drew to a close.

As always, you can check out the code at the Parrot Plumage repository, and don't hesitate to ping me on IRC -- I'm japhb on #parrot. If you'd like to join the effort, read the README for the general overview, then come to #parrot to get your questions answered!

Tuesday September 22, 2009
02:03 AM

Parrot Plumage Day 5: Configure.nqp and a 'proper' Makefile

Shortly after Day 4, day job deadlines loomed, and I expected Day 5 to be delayed at least a week. Thanks to some late nights, this morning's status meeting at the day job went well, so Tene++'s ping on #parrot caught me with a few spare tuits and a strong desire to take a break from the tasks of the last few days.

Tene had pasted a patch to the Makefile for Plumage, essentially suggesting that it be a bit less hard-coded and instead use the parrot_config binary to fill in some of the variables. Unfortunately, his patch used backticks (`...`) to capture the output of parrot_config, which is not portable across make implementations. In fact, it appears that every make does this differently -- GNU Make uses the $(shell ...) construction, BSD Make uses the special != assignment operator, and so on.

Asking around, it soon became clear that this morass of incompatible syntax for capturing shell output is one of the reasons everybody just uses a Configure script of some sort. In fact, a simple Configure script does nothing more than replace markers in a makefile template with the proper text, and write out the completed Makefile to be fed to make.

That sounded like exactly the kind of simple substitution that I needed for the Plumage Makefile, but I couldn't just use one of the zillion existing Configure scripts, because it needed to be written in NQP for much the same reasons that Plumage itself is.

Well, I clearly couldn't leave that situation lie, so a couple hours later, Configure.nqp was born. After implementing slurp() and spew() functions in the Glue.pir "builtins" library, reading the template and writing the output again were trivial, but the interesting task turned out to be implementing subst() to do the text substitutions.

Parrot has for quite some time now shipped with PGE, the Parrot Grammar Engine. Among other things, PGE has built-in support for the Perl 5 and Perl 6 grammar/regex languages. With some gracious help from pmichaud++ and a good bit of spelunking in the Rakudo src/builtins/ directory, I pieced together the necessary bits to instantiate the Perl 6 variant, use it to compile the regex argument to subst(), and iterate over the matches performing the substitutions.

At first I implemented purely static string replacement; the same new string would be used to replace every match point in the original. This method would however be horribly slow for the task at hand -- I'd have to run the substitution across the entire makefile text for each of the hundreds of config values that parrot_config knows about.

Taking a hint again from the Rakudo implementation of subst(), I also made it possible to supply a sub instead of a plain string as the replacement argument. This sub gets called with each match object in turn, and returns the appropriate string to use for the replacement. This makes the relevant code from Configure.nqp remarkably clean:

my $replaced := subst($unconfigured, '\@<ident>\@', replacement);

sub replacement ($match) {
    my $key    := $match<ident>;
    my $config := %VM<config>{$key} || '';

    return $config;
}

(Yes, replacement() could easily be written as a one-liner. It seemed a bit clearer this way.)

The plain string form of subst() still gets used, to fix the slash orientation for Windows systems:

if ($OS eq 'MSWin32') {
    $replaced := subst($replaced, '/', '\\');
}

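For readers more at home outside NQP, the callable-replacement idea can be sketched in Python with `re.sub`, which accepts a function as the replacement argument in exactly the same spirit. The config keys and values here are invented for illustration; they stand in for parrot_config output:

```python
import re

# Hypothetical config values standing in for parrot_config output.
config = {"cc": "gcc", "prefix": "/usr/local"}

template = "CC = @cc@\nPREFIX = @prefix@\n"

def replacement(match):
    # Look up the captured identifier; unknown keys become ''.
    return config.get(match.group(1), "")

# A single pass over the template replaces every @ident@ marker,
# rather than one full-text pass per config key.
makefile = re.sub(r"@(\w+)@", replacement, template)
```

The single-pass behavior is the whole point: the match-driven callback scales with the number of markers in the template, not with the hundreds of keys parrot_config knows about.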
A few miscellaneous cleanups and an update to README later, and the Makefile cleanup task (which I expected to remain near the bottom of the task list for a fair bit, nibbling at the edge of my consciousness) is Just Done. That's a good feeling.

As always, you can check out the code at the Parrot Plumage repository, and don't hesitate to ping me on IRC -- I'm japhb on #parrot.

Monday September 14, 2009
02:51 AM

Parrot Plumage Day 4: First Bones of the Skeleton

Day 1 and Day 2/3 got me (mostly) prepared to begin the actual coding for the Parrot Plumage prototype. There were just two things left to figure out: how to access command-line arguments, and how to initialize complex data structures.

The first problem turned out to be relatively simple to solve with a couple more additions to the glue library to initialize @ARGS and $PROGRAM_NAME from the IGLOBALS_ARGV_LIST field of the Parrot interpreter globals.

The second problem (initializing complex data structures) required a couple silly hacks, but I got a decent workaround in the end. Allow me to explain ....

NQP natively can index into hashes, arrays, and complex arrangements of same with relative ease. Want to access an entry in a hash of arrays of hashes, with some fixed keys and some varying keys? No problem. Something like this will do:

$age := %pets{$type}[$num]<age>;

The problem is, there's no short and simple way to initialize the %pets hash. NQP doesn't have syntax for hash or array literals, which makes it rather painful to instantiate complex structures. Oh, I suppose you could write a whole bunch of these:

%pets<feline>[0]<age> := 5;
%pets<feline>[1]<age> := 6;
%pets<canine>[0]<age> := 4;

but as you can imagine that gets old rather quickly -- and doing the initialization in raw PIR would be even more painful. Luckily, during my previous hack day I'd brought the JSON parser up to current standards, so I used it for a little trick. I defined the data structure I wanted as a JSON string, parsed it into a real data structure using the data_json language that now ships with Parrot, and then ran a fixup routine on the few bits that didn't translate easily.
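The same trick translates directly to any language with a JSON parser. Here's a Python sketch using the %pets example from above -- a language without handy nested literals can still build a complex structure by parsing a JSON string:

```python
import json

# The trick: define the nested structure as a JSON string, then
# parse it into a real data structure in one step.
PETS_JSON = '''
{
    "feline": [{"age": 5}, {"age": 6}],
    "canine": [{"age": 4}]
}
'''

pets = json.loads(PETS_JSON)
print(pets["feline"][0]["age"])   # -> 5
```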

That last bit might require a little explanation. I could only create data structures this way containing data types the JSON language knows how to represent: numbers, strings, arrays, hashes, booleans, etc. However, there's no way to represent a code reference, and one of the applications I had in mind was a function table for mapping user commands, such as version and install to the subroutines that would implement them.

My solution was to replace the code references with the names of the subroutines I wanted in the JSON text, and then after parsing the JSON run a peculiar bit of inline PIR to replace the names with the real code references. The fixup function in question looks like this:

sub fixup_commands ($commands) {
    Q:PIR {
        $P0 = find_lex '$commands'
        $P1 = iter $P0
      fixup_loop:
        unless $P1 goto fixup_loop_end
        $S0 = shift $P1
        $P2 = $P1[$S0]
        $S1 = $P2['action']
        $P3 = get_hll_global $S1
        $P2['action'] = $P3
        goto fixup_loop
      fixup_loop_end:
    };

    return $commands;
}

The short explanation is that the loop in the above PIR block iterates over each entry in the $commands hash, and for each action key replaces the matching string value (the subroutine name) with the subroutine itself, looked up using the get_hll_global op. And it works!
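The name-to-code-reference fixup can be sketched in Python as well. The command names and functions below are invented for illustration, and the lookup uses an explicit registry dict where the PIR version uses the get_hll_global op:

```python
import json

# Hypothetical command implementations.
def cmd_version():
    return "plumage 0.1"

def cmd_install():
    return "installing..."

# JSON can't represent code references, so each entry names its
# subroutine as a plain string instead.
COMMANDS_JSON = '''
{
    "version": {"action": "cmd_version"},
    "install": {"action": "cmd_install"}
}
'''

def fixup_commands(commands, registry):
    # Replace each 'action' name with the real function it names.
    for entry in commands.values():
        entry["action"] = registry[entry["action"]]
    return commands

registry = {"cmd_version": cmd_version, "cmd_install": cmd_install}
commands = fixup_commands(json.loads(COMMANDS_JSON), registry)
print(commands["version"]["action"]())   # -> plumage 0.1
```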

These problems out of the way, I could finally start writing a prototype of the plumage command line tool. Time eventually ran out on my hacking day, but I managed to implement basic versions of usage, version, and info commands, and a command parser and dispatcher that could be charitably described as "beyond minimalist".

While the first two commands just provide info about plumage itself, the last one is the first command that actually provides real functionality. In particular, info simply loads the metadata for a given project and prints the details for the user.

In order to have some metadata to display with my brand new info command, I took a shot at filling in the fields from the Metadata Proposal manually for a single project. In this case, I chose Blizkost because it's useful, cool, and non-trivial without being insane.

Unsurprisingly, I came across a couple underspecified bits in the metadata proposal, but things mostly seemed to make sense. The result is a tad wordy, but it is after all intended to be generated mostly automatically.

That done, I found myself out of time for this hack day, but things are looking up for implementing an ultra-simple version of the install command next time. Until then, I invite you to browse the repository or just come by #parrot and ping me (japhb). See you there!

Tuesday September 08, 2009
12:16 PM

Parrot Plumage Day 2/3: On the Shaving of Yaks

After spending Day 1 mostly exploring the boundaries of NQP, I was hoping to put the pedal to the metal and start the Parrot Plumage implementation in earnest. I ended up with more bald yaks instead.

During the first day I discovered that I needed two features added to NQP to make progress on the ecosystem tools: the ability to do cross-language eval (a prime raison d'être for Parrot), and the ability to declare object attributes directly for proper OO (NQP having made do so far with implicit or PIR-coded attribute definitions). Attribute declaration was not ready yet, but Tene++ had produced a (surprisingly simple) implementation of cross-language eval, so I decided to push on in that direction.

The first thing I wanted to do was parse JSON data using the existing JSON parser that ships with Parrot. Unfortunately, it had been some time since the JSON parser had been updated, and it was still conforming to an older compiler API. At the raw PIR level, the old API looks like this:

.local pmc json, data
load_bytecode  'compilers/json/JSON.pbc'
json = compreg 'JSON'
data = json(text)

The old API is still fully functional for doing work with just one language (plus PIR, which is always available), but it doesn't support working with multiple high-level languages in the same program. Thankfully, the new API involves only minor changes:

.local pmc json, code, data
load_language  'data_json'
json = compreg 'data_json'
code = json.'compile'(text)
data = code()

Essentially, the new API makes just two changes. First, the compiler is loaded using the load_language op, rather than the more generic load_bytecode op. Second, rather than the compiler being a simple subroutine called directly on the source text to produce a final result, a compiler is now an object with a compile method that converts source text into a subroutine representing the "program". Since JSON is a non-executable language, this subroutine merely creates and returns the data structure representing the JSON text -- so the last step is to call the subroutine to get that data structure.
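A toy Python analogue may make the two-step shape of the new API clearer -- this is not Parrot's API, just a sketch of the compile-then-invoke pattern it follows for a data-only language:

```python
import json

class DataJsonCompiler:
    """Toy analogue of the two-step compiler API: compile() turns
    source text into a callable 'program'."""
    def compile(self, text):
        # For a non-executable, data-only language, invoking the
        # "program" simply builds and returns the data structure.
        return lambda: json.loads(text)

compiler = DataJsonCompiler()
code = compiler.compile('{"answer": 42}')
data = code()   # run the "program" to get the data structure
```

The indirection looks pointless for JSON alone, but it's what lets data-only and executable languages share one compiler interface.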

Updating the JSON parser to the new API would have been relatively simple, but for one problem -- the new API requires the compiler have a lowercase name. Those of us with some experience dealing with cross-platform development will immediately blanch upon discovering such a requirement and realizing (as in this case) that the existing JSON compiler not only used an uppercase name internally, but some of the files that implemented it were uppercased in source control, while others weren't. Oops.

After some discussion on #parrot, we decided the least havoc for our users meant copying the compiler to a new name and deprecating the old one. Since we needed a new name, and "compilers" for data-only languages are somewhat special, we came up with the informal convention of prefixing the data format's name, in lowercase, with data_. Thus data_json was born.

Some hacking later, I discovered a few namespacing issues in the original JSON compiler; fixing those allowed data_json to finally work from high-level languages such as NQP, as well as from multiple namespaces in PIR. Leaving some final details (such as converting any existing tests) for another day -- or another enterprising coder, hint hint -- I went on to the next task.

Using the ecosystem tools should be as easy as possible for the user. Rather than this:

parrot /path/to/NQP/compiler/nqp.pbc plumage.nqp install foo

I'd much rather have this:

plumage install foo

Thus the next task was to figure out how to produce a proper executable from the plumage.nqp source. Some generous cargo culting, a (sort-of) proper Makefile, and a judicious hack later, I could produce a working plumage executable for a trivial bit of NQP.

Sadly, by that point, my available time for the hacking session had run out, so we'll see what the next session brings. If you'd like to help, discuss interactions with other projects, or even just ask questions, come by #parrot and ping me (japhb). See you there!

Monday August 24, 2009
01:30 AM

Parrot Plumage Day 1: Much Ado About NQP

The time has clearly come to start work on Parrot's module ecosystem -- the module installer, search tool, dependency checker, and so on. We've been discussing various pieces on #parrot, #perl6, and parrot-dev for a few weeks, and a couple weeks ago we reached rough consensus. At that point, I collected all the comments and emails and wrote up a draft design document. I guess you could call that 'Day 0'.

Today I managed to commit a few hours to getting started on actually implementing some of that design. The implementing itself turned out not to happen, but I did manage to set up a project repository for the prototype and spent the rest of the time exploring NQP.

NQP stands for "Not Quite Perl 6", a (considerably) simplified variant of Perl 6 that lends itself well to optimization and clean implementation. It is one of the standard languages shipped with Parrot and generally used as a tool for creating more complex languages (such as, unsurprisingly, the Rakudo implementation of Perl 6).

Because it is shipped with Parrot, we can be confident that it will always be there, no matter what other languages the user may or may not have installed. And because it is a subset of Perl 6, it's easy to use (though a tad wordy in a few places where full Perl 6 provides heavier syntactic sugar). Thus I decided to implement the module tools using NQP.

Unfortunately, NQP is a little too simplified right now. It includes all the necessary features to implement parser actions for a compiler (the most common use for it), but this will be the first project using NQP to write fully independent command line tools. These tools are even more demanding in that they will need to work pretty extensively with platform dependencies -- from system configuration to process spawning.

Because there isn't any existing NQP code that does the kinds of things this project will need, I spent most of today exploring the boundaries of what existed in NQP already, what was missing, and how much could be filled in by borrowing magic from Rakudo. At this point, I can pull in environment variables, Parrot configuration, and OS information; and I can shell out to external commands, optionally capturing the output. That all went fairly smoothly, if slowly.

Unfortunately, I had more problems with writing OO code in NQP. It turns out that most of the OO code written in NQP creates classes implicitly or through hand-coded PIR. Even though there are class and method declarators, there appears to be no way in pure NQP to declare object attributes -- nor an obvious way to fake them using hand-coded new, BUILD, or bless.

Of course, I can always fall back to hand-coded PIR, even faking my own declarators, but there was some agreement that this is something NQP really should provide out of the box; I'll be talking with pmichaud++ about this in the coming days.

The project goal is to have at least the basic toolchain implemented and working smoothly for Parrot 2.0, in keeping with the official 2.0 vision: "Production Users". In order to have things running smoothly by then we need to get the prototype up and running soon, so that we can iterate several times before Parrot 2.0 is released. If you'd like to help, come by #parrot and ping me (japhb). See you there!

Tuesday May 17, 2005
09:23 PM

Limits of automated testing

Recently I was doing debugging-via-email of some graphics code and it turned out that the problem became perfectly obvious with a single screenshot of the demo scene. I realized that I have a habit of designing test scenes for my graphics code that make rendering errors pop out to the human eye -- most any problem becomes obvious in a second or two. This is most definitely a Good Thing.

Of course, many readers will point out that I should have a strong automated test library to catch these sorts of issues; the user should just be able to run make test and get a nice test failure describing the problem. That's great advice, and I'd love to give the users a nice automated test suite . . . but how?

The problem here is that OpenGL is guaranteed to produce consistent output from run to run, but only for the same hardware, drivers, resolution, color depth, and so on. Change any of these, and the output is allowed to change quite drastically. Yes, there are some bounds to how far afield the output can stray -- but these bounds are, to put it lightly, pretty loose.

So how do you automate the testing, when OpenGL drivers are often buggy and sometimes vary widely from vendor to vendor and system to system? You can't keep canonical images in the testing library to compare against because the output from the developer's system is unlikely to match pixel-by-pixel on basically anyone else's system. You can't even save the output from the first test run, and compare against it in succeeding builds; if the user happens to upgrade their video drivers, or alter the desktop resolution or color depth, or make some other completely reasonable change to their system, the pixels are quite likely to be different.

I could spend a great deal of effort writing a fuzzy image matcher that determined whether two images matched within the available OpenGL guarantees -- and submit the result as a dissertation on computer vision. It's certainly an interesting subject for research, and I wouldn't be surprised to hear that SGI did exactly this to create their conformance testing library, though I've never seen it -- but it has little to do with the graphics work in front of me. I'd rather spend my time actually producing working rendering code.
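To make the tradeoff concrete, here is the kind of naive tolerance-based comparison one might sketch as a first attempt -- a Python toy that's useful for intuition, and nowhere near the conformance-grade matcher described above (the tolerance and threshold values are arbitrary):

```python
def images_roughly_match(a, b, per_channel_tol=8, max_bad_fraction=0.01):
    """Toy fuzzy comparison of two same-sized RGB images, given as
    lists of (r, g, b) tuples: they 'match' if only a small fraction
    of pixels differ by more than a per-channel tolerance."""
    bad = 0
    for pa, pb in zip(a, b):
        if any(abs(ca - cb) > per_channel_tol for ca, cb in zip(pa, pb)):
            bad += 1
    return bad <= max_bad_fraction * len(a)

flat  = [(10, 10, 10)] * 100              # reference render
noisy = [(12, 9, 11)] * 100               # small driver-level jitter
wrong = [(200, 10, 10)] * 5 + flat[5:]    # a visibly broken region
```

The jittered image passes while the broken one fails -- but this scheme knows nothing about *where* or *why* pixels differ, which is exactly the gap a real matcher (or a human eye) has to fill.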

This brings me back to my original method -- design test scenes that can take advantage of the human visual system by making real rendering errors pop out of the noise of absolute pixel value differences. An incorrect pixel here or there would be difficult for the eye to detect (unless the color was WAY off), but I'm looking more for "Well, clearly something's wrong with texturing -- the big wooden box is rendered as a plain black cube."

Yes, a human needs to get involved. But with proper test design any problems should be obvious -- so obvious that the user may react even before being able to verbalize the issue, because reacting quickly to seeing something wrong is a survival trait. In many cases, this process can actually be faster than running a full automated test suite for a complex system. Granted, it won't help autobuilders with automated test runs on the latest source revision, but it does have its place.

None of this is specific to graphics code, of course; that's just the area I've played the most with. These testing issues will come up anywhere you need to test interactions with the real world, where APIs often don't guarantee perfect reproduction on all systems, in all environments. How do you know, for instance, that the user can actually hear the music you are trying to play?

In the last few years I've come across quite a few books and articles espousing automated testing, and going into great detail about how to write automated tests and test harnesses in every development environment and programming language under the sun. What seems to be missing is a decent treatise on producing good user-oriented tests. Maybe I'm just looking in the wrong places . . . .

Wednesday May 11, 2005
03:21 PM

Win32 SDL_perl 1.20.3 released

Wayne Keenan is back working on the SDL_perl port for Win32, and I am hosting the binary builds for him. This release brings the Win32 port up to 1.20.3 (matching most of the free *nixen), which means all of the examples from my Building a 3D Engine in Perl articles should now work perfectly.

Full PIGGE support and other fixes beyond 1.20.3 are in progress.