Journal of Alias (5735)

Wednesday July 21, 2010
09:27 PM

Profiling your website while live on a production cluster

As I mentioned in my last post, by day I wrangle a large web application that occasionally verges on being too complex for mere humans to understand.

Curiously, because it is private and the income rate is high (we probably average something like $5 in gross turnover per page view) we don't have to deal with a lot of servers to deliver it, by internet standards anyway.

But the application is still way too big to profile easily by normal methods, and certainly on production it's way too heavy, even if we applied targeted profiling using something like Aspect::Library::NYTProf.

Between the web servers, transaction server, database, search engine and cache server, we are probably only dealing with 12 servers and 30 CPUs. Of course, these servers are all horribly expensive, because they are all server-virtualised, network-virtualised, doubly redundant (high-availability + disaster-recovery) and heavily monitored with high end support contracts.

One of our most sensitive issues is database load.

We have a ton of database tables (about 200) and lots of medium sized queries running across them. One of our main failure modes is that some deep change to code boosts the quantity of some significant query, which stresses the database enough to cause contention and lock-storms, leading to site failure.

Complicating things, big parts of some pages are embedded in other pages. So attributing load and lag to one part of the website, or to Oracle, is tricky and hard to benchmark in advance (although we do load test the main load paths to catch the worst cases).

For a long time, we've had a mechanism for zoned profiling of the production site, so we can allocate wallclock costs to different general areas of the site.

But it is fragile and unreliable, requiring perfect maintenance: every developer has to remember to write this kind of thing everywhere.

# Embed a foo widget in the current page
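# (A hypothetical sketch of the pattern; the real calls are the
# push/pop_timing pair mentioned below.)
push_timing('widget_foo');
my $html = render_foo_widget();
pop_timing('widget_foo');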

Since you don't know this profiling system exists unless you've seen it somewhere else in the code before, and it's hard to care about something that is orthogonal to the problem you are actually solving, this mechanism has degraded over time. While we still get some pretty Cacti graphs showing load breakdown, they are highly unreliable and you can never be entirely sure if the load attribution is correct.

This kind of performance monitoring as a "cross-cutting concern" is a textbook use case for Aspect-Oriented Programming, and so in our Christmas 2009 code freeze window I set about trying to rewrite the push/pop_timing profiler using Aspect.

Unfortunately, Aspect turned out to be a bit too slow and naive for my needs. But after a 6 month delay to do a 90% rewrite of it, I now finally have something sophisticated enough (and fast enough) to meet my needs.

So today I'm releasing the shiny new Aspect::Library::ZoneTimer, which will serve as the main plank of our new production profiling system.

The idea behind ZoneTimer is to define each performance zone as a pointcut. The aspect will track the movement of your code across each zone boundary and build a running total of the exclusive time spent in each performance zone.

When your program exits the top-most performance zone, the totals will be sent to a handler function.

A typical use of the ZoneTimer aspect looks something like this

use Aspect;

aspect ZoneTimer => (
    zones => {
        main      => call 'MyProgram::main' & highest,
        search    => call 'MyProgram::search',
        widgets   => call qr/^MyProgram::widget_.*/,
        templates => call 'MyProgram::render',
        dbi       => call qr/^DB[DI]::.*?\b(?:prepare|execute|fetch.*)$/,
        system    => (
            call qr/^IPC::System::Simple::(?:run|runx|capture)/
            |
            call 'IPC::Run3::run3'
            |
            call qr/^Capture::Tiny::(?:capture|tee).*/
        ),
    },
    handler => sub {
        my $top       = shift; # "main"
        my $start     = shift; # [ 1279763446, 796875 ]
        my $stop      = shift; # [ 1279763473, 163153 ]
        my $exclusive = shift; # { main => 23123412, dbi => 3231231 }
        print "Profiling from zone $top\n";
        print "Started recording at " . scalar(localtime $start->[0]) . "\n";
        print "Stopped recording at " . scalar(localtime $stop->[0])  . "\n";
        foreach my $zone ( sort keys %$exclusive ) {
            print "Spent $exclusive->{$zone} microseconds in zone $zone\n";
        }
    },
);

This example breaks out the cost of a typical small web application into a general zone, a special zone for the search page, and then splits the costs of generating widgets, rendering the HTML template, waiting for the database, and making calls to the underlying system.

Start and stop times are returned as a two element array exactly as returned by Time::HiRes::gettimeofday, and the exclusive zone totals are returned as integer microseconds (all math is done in integer microseconds to prevent floating point corruption).
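For example, a tiny helper (purely illustrative, not part of the module) to turn two of those pairs into an elapsed time in integer microseconds:

# Elapsed integer microseconds between two [ seconds, microseconds ]
# pairs, as returned by Time::HiRes::gettimeofday
sub elapsed_usec {
    my ($start, $stop) = @_;
    return ( $stop->[0] - $start->[0] ) * 1_000_000
         + ( $stop->[1] - $start->[1] );
}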

The use of Aspect allows us to easily mix in special cases via the pointcuts, such as the use of "highest", which makes sure that "main" only matches the first time it is seen, so any widget that does a re-entry into main still has that cost attributed to the widget. We've also hooked into multiple system call modules to measure system call cost, because we know different modules our program consumes use different methods for interacting with the system.

While the handler I've shown here will just print out a summary of the call, in our environment at work the profile report handler will format the exclusive times into a special log message and then send it via the super-quick and non-blocking Log::Syslog::Fast module to the UDP localhost syslog service, where it is mirrored to disk for debugging use, and then forwarded on to our main company-wide syslog server.
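A handler along those lines might look roughly like this (the host, port, facility and message format are illustrative assumptions; the constructor arguments follow the Log::Syslog::Fast documentation):

use Log::Syslog::Fast ':all';

# One persistent, non-blocking logger to the localhost UDP syslog service
my $logger = Log::Syslog::Fast->new(
    LOG_UDP, '127.0.0.1', 514,    # protocol, host, port
    LOG_LOCAL0, LOG_INFO,         # facility, severity
    'webserver01', 'zonetimer',   # sender name, program name
);

sub handler {
    my ($top, $start, $stop, $exclusive) = @_;
    my $report = join ' ', map { "$_=$exclusive->{$_}" } sort keys %$exclusive;
    $logger->send( "zones($top) $report", time );
}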

On our setup, we can then use an excellent commercial log analysis product called Splunk (limited free version also available if you want to try it out) to do our tracking and trending of the performance for the entire application across the entire cluster.

The nicest thing about the Aspect system is that it scales the performance cost and complexity risk directly to the complexity of the use case you are using it for.

If it turns out that the Aspect hooks are too intrusive and causing a performance drain, or accidentally causing some weird edge case behaviour, you can simply turn off the Aspect and restart the servers, and the performance penalty just vanishes.

Maintenance of the profiler zones is really easy, because they are all listed clearly in one place.

Depending on your use case, you could even define the performance zones in your website configuration, and then adjust the profiler zone boundaries directly on production (although I'm not sure I'd want to do that in our specific case, since we're fairly paranoid about untested code going into production).

This ZoneTimer is just the first of several Aspect-based tools I plan to release over the next few months. If you get a chance to try out any of these new toys, I'd love to hear feedback on how well they work for you.

In addition to the value of the ZoneTimer aspect itself, one other thing that some people might find of interest is just how little code it took to get this all working.

The entire Aspect::Library::ZoneTimer was implemented in about 80 lines of code, which you can read in the module's source on CPAN.

This is small enough that you could just clone the module and tweak it for your own use case, if you wanted to create a new and tricky customised profiler.

Monday July 19, 2010
09:25 PM

How we deploy massive Perl applications at work

Every now and then, we hear people talking about mechanisms for doing Perl in a commercial environment and how they deal with packaging and dependencies.

This is mine.

At Corporate Express, our main Perl application is a 250,000 line non-public monster of a website that has over 100,000 physical users and turns over about a billion dollars. It implements huge amounts of complex business functionality, and has layer upon layer of security and reliability functions in it because we supply to multinationals, governments and the military (only the stuff that doesn't blow up of course). Our .pm file count is around 750, and our test suite runs for about 4 hours (and is only around one third complete).

Lest you suspect that 200,000 of those lines are wasted re-implementing stuff, the main Build.PL script has around 110 DIRECT dependencies, and somewhere in the 300-500 range of recursive dependencies. Loading the main codebase into memory takes around 150-200meg of RAM.

When I joined the team, the build system was horribly out of date. The application was stuck on an old version of RedHat due to go out of support, and as a Tier 1 application we are absolutely forbidden from running on an unsupported platform.

So I took on the task of upgrading both the operating system and the build system for the project. And it's a build system with a history.

Once upon a time, long ago, the project went through a period where the development team was exceptionally strong and highly skilled. And so of course, they created a roll-your-own build system called "VBuild".

They built their own Perl, and along with it they also built their own Apache, mod_perl, and half a dozen other things needed by the project. This is similar to many suggestions I hear from high-skilled people today, that at a certain point it's better just to build your own Perl. VBuild was created in the pre-commercial Linux era, so it was not an entirely unreasonable decision for that time period.

Unfortunately, a few years later the quality of the team dropped off and VBuild turned into a maintenance nightmare because it required a high-skill person to maintain it.

At the time, the Tier 1 "Must be supported" policy was coming into effect, and after the problems with custom-compiling they decided to go with the completely opposite approach of using only vendor provided packages, in a system called "UnVBuild".

Since their platform of choice was RedHat, this had become troublesome even before I arrived. Worse, in the change from RHEL 4 to RHEL 5, some of the vendor packages for things like XSLT support were dropped entirely leaving us in a bind.

My first instinct was to return to the build-everything approach, but the stories (and commit commentary) from that time period reinforced the idea that a complete custom build was a bad idea. Office supplies is hardly a sexy industry, and the difficulty of enticing good developers into it is a quite legitimate risk.

So in the end, I went with an approach we ended up nicknaming "HalfBuild". The concept behind it is "Vendor where possible, build where needed".

We use a fairly reasonable chunk of vendor packages under this model. Perl itself, the Oracle client, XML::LibXML and a variety of other things where our version needs are modest and RHEL5 contains it. We also use a ton of C libraries from RHEL5 that are consumed by the CPAN modules, like all the image libraries needed by Imager, some PDF and Postscript libraries, and so on.

One RPM "platform-deps" meta-package defines the full list of these system dependencies, and that RPM is maintained exclusively by server operations so that we as developers are cryptographically unable to add major non-Perl dependencies without consulting them first.

On top of this is one enormous "platform-cpan" RPM package built by the dev team that contains every single CPAN dependency needed by all of our Perl projects.

This package lives in its own home at /opt/cpan and is built using a process rather similar to the core parts of Strawberry Perl. With PREFIX pointing into /opt/cpan, we first build a hand-crafted list of distribution tarballs to upgrade RHEL5 to a modern CPAN toolchain (without overwriting any normal system files).

We then boot up the CPAN client from /opt/cpan, and tell it to install all the rest of the dependencies from a private internal CPAN mirror which is rsynced by hand specifically for each new CPAN build. This ensures that the build process is deterministic, and that we can fix bugs in the build process if we need to without being forced to upgrade the modules themselves.

The CPAN client grinds away installing for an hour, and then we're left with our "CPAN Layer", which we can include in our application with some simple changes to @INC at the beginning of our bootstrapping module.
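The @INC change itself is tiny. A sketch of the sort of thing the bootstrapping module does (the paths here are illustrative):

# Prepend the CPAN Layer to @INC, ahead of the vendor Perl libraries
use lib '/opt/cpan/lib/perl5/x86_64-linux-thread-multi';
use lib '/opt/cpan/lib/perl5';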

The /opt/cpan directory for our project currently weighs in at about 110meg and contains 2,335 .pm files.

Updating /opt/cpan is still something of an exercise even under this model because of potential upgrade problems (Moose forbidding +attributes in roles hit us on our most recent upgrade) but the whole process is fully automated from end to end, and can be maintained by a medium-skill Perl hacker, rather than needing an uberhacker.

Over the last 2 years, we've upgraded it around once every 6 months and usually because we needed to add five or ten more dependencies. We tend to add these new dependencies as early as we can, when work that needs them is confirmed but unscheduled.

We also resort to the occasional hand-copied or inlined pure-Perl .pm file in emergencies, but this is temporary and we only do so about once a year when caught unprepared (most recently Text::Unidecode for some "emergency ascii'fication" of data where Unicode had accidentally slipped in).

While not ideal, we've been quite happy with the /opt/cpan approach so far.

It means we only have to maintain 5 RPM packages rather than 500, and updating it takes one or two man-days per 6 months, if there aren't any API changes in the upgrade.

And most importantly it provides us with much better bus sensitivity, which is hugely important in applications with working lives measured in decades.

Saturday July 10, 2010
11:50 AM

Padre Second Birthday Party 24th-25th of July

2 years ago this month, Gabor did the first Padre release.

The last 12 months have seen Padre mature from a high-end text editor to a low-end refactoring IDE. We've stolen a number of features from Ultraedit, Komodo and EPIC, and we've invented new features all of our own, making Padre a very fluid and natural place to write Perl in.

We've added support for Perl 6, Template Toolkit, remote file support, more languages, syntax checking, an interactive debugger, a regex editor, and our first half a dozen refactoring tools.

We've also greatly solidified the code. Window integration is now totally solid, we've added a resource locking API, a new filesystem API, a new search API, a new display API, rewritten the threading and background Task subsystem, heavily overhauled the Plugin Manager API and GUI, and added Advanced Preferences and the ability for advanced users to selectively disable various Padre bloat/features.

The last couple of months have also seen great improvements in Padre's hackability as well. The new Task 2.0 API lets people write background logic and consume multiple cores of CPU without having to know how threading works, and the new wxFormBuilder plugin lets you build GUI code without having to know Wx (one of the biggest barriers to contributing to Padre).

On the weekend of the 24th-25th of July we would like to invite all Padre developers, users, friends and well-wishers to join us for Padre's Second Birthday Party and Hackathon in the Padre IRC channel at irc:// or via the Mibbit Web Client.

If you've always been curious about, or interested in hacking on, Padre we'll have a number of developers in channel to help you out.

Personally, I plan to debut the first public release of Padre::Plugin::FormBuilder, and to start ripping out all Padre's older fixed-size dialogs and replacing them with new shiny model-generated sizer-based dialogs that will work much better across all three operating systems.

If you'd like to help out in this effort, I'll be in channel most of the day on both days (Sydney timezone).

I look forward to seeing you all there.

Monday July 05, 2010
11:10 PM

Yahoo provides (awesome) alternative Geocoder to Google's

As far as I'm concerned, there are three critical things you need in a Geocoder service.

1. Global Coverage

Because when it doesn't have global coverage, it basically means "America-only" and is thus pointless for most of the world.

2. Multiple Matching

For ordinary humans, there's massive power in being able to just search for "1 Oxford Street" without listing any more details. From there, if it is a country-specific application you just change it to "1 Oxford Street, Australia" behind the scenes.

That results in a list of possible locations.

You show the result for the first match, and then a list of "Did you mean:" links for the other results. This lets the application do what people mean most of the time, while making it trivial to recover if it isn't accurate enough.

You can see the effect I'm talking about by running the example query here...

3. No usage limitations

Google Maps Geocoder can only be used with Google Maps. Fail.

I'm willing to accept a volume limitation like "You have to pay after the first 50,000 requests" but I can't accept a usage limitation.

I'm happy to report that Yahoo's new PlaceFinder service is the first free Geocoder that meets all these primary criteria.

It's global in scope, lets you control the list of results with paging, and doesn't limit the usage to any particular domain.

Now all we need is a Geo::Coder plugin for it.
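Such a plugin would presumably follow the existing Geo::Coder::* conventions; something like this entirely hypothetical interface:

use Geo::Coder::PlaceFinder;    # hypothetical module name

my $geocoder = Geo::Coder::PlaceFinder->new( appid => 'YOUR_YAHOO_APP_ID' );
my @places   = $geocoder->geocode( location => '1 Oxford Street, Australia' );

# First result is the best guess, the rest feed the "Did you mean:" list
my $best = shift @places;
printf "%s (%f, %f)\n", $best->{line1}, $best->{latitude}, $best->{longitude};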

Please write one for me Lazyweb :)

Wednesday June 30, 2010
10:32 PM

The next challenge for Perl on Windows (et al)

Even at this early stage, before any actual installers have appeared, things are looking pretty good for Strawberry Professional Alpha 2.

Padre is starting to firm up as a usable editor, Frozen Bubble works quite well and is completely playable, and I think my wxFormBuilder work is coming along nicely (which bodes well for quickly and easily creating GUI apps in Alpha 3...)

My biggest remaining concern at the moment though, is also one of the smallest and seemingly trivial issues.

Although we can now much more easily build and install large, rich and good looking desktop applications, there is still no way to launch these applications without resorting to the command line.

Any icons that appear in the Start Menu in Strawberry Professional will be there because Curtis has put them there himself.

A Perl installation, via the %Config information, lets an installer know where to put libraries, binaries, documentation etc. Shared files in the File::ShareDir model are really just a hack, putting them in an agreed location within lib.
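For example, a module that ships an icon in its share directory can locate it at run-time like this (module and file names are illustrative):

use File::ShareDir 'dist_file';

# Resolves to the agreed shared-files location inside the install tree
my $icon = dist_file('Padre', 'padre.png');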

File::HomeDir isn't any use to us here either. It is designed to let programs deal with user-owned data at run-time, NOT with system integration at install time.

Without the ability to install programs in a way that desktop users can easily launch, our little nascent desktop revolution will never really be able to get up to speed.

Having a post-install step where you need to launch a Windows command line, and then run "padre --desktop" just to install a desktop icon is simply not good enough.

Likewise, having to run Frozen Bubble from the command line is silly as well.

So consider this a challenge to anyone out there that likes tackling tricky puzzles, to try and build a File::Launcher (or whatever you want to call it) that can locate install paths, and be integrated into module installers, so we can make proper use of the Start Menu, Desktop, Quick Launcher, and all the equivalent features on all three major desktop platforms (Windows, Mac, and Linux).

If you build it, Padre will use it.

Tuesday June 15, 2010
07:41 AM

I hereby report the success of my 2005 Perl Foundation Grant

In late 2005 I was awarded a grant from the Perl Foundation for "Extending PPI Towards a Refactoring Perl Editor". You can see the original grant here.

My idea at the time was to take the recently completed PPI parser, fill in a variety of different supporting features, build some modules to improve platform integration, and then build a refactoring editor that used PPI and that would fire the imagination of what was possible in Perl.

A year and a half later in early 2007, I was forced to report the failure of the grant.

Not having anything like the time or experience to build an editor from scratch, I had intended to build a thin refactoring layer over the top of an existing Perl editor. For various reasons, I couldn't make this workable.

To my great surprise and astonishment, the Perl Foundation decided to pay out the grant anyway. Their reasoning was that I had indeed completed all the secondary goals, and the benefits provided by the creation of Vanilla Perl (later to become Strawberry Perl) which was NOT in the grant made the grant worthwhile anyway.

As Curtis noted at the time "Given that he didn't complete his grant, some might be surprised that we've decided to make the final payment...".

And indeed, from time to time the subject has come up here and there about getting paid despite not succeeding. Not often, and not loudly, but occasionally and ongoing.

It has also eaten at me. I hate not delivering on what I say I'll do, and I wanted the refactoring editor for me as much as for everyone else. While Padre has grown and improved, it's never quite met my expectations of what a great Perl IDE should be like.

So it is with great joy that, with the landing of the Task 2.0 API, I think Padre finally meets the criteria I set for myself in taking the grant: a Perl refactoring IDE that would let me give up Ultraedit and present Perl in a modern GUI setting.

Task 2.0 makes Find In Files, Replace In Files, and Incremental Search only a small amount of coding away from being created. These are the last features I need to finally ditch Ultraedit once and for all. The time has arrived, I think, to draw a line in the sand.

And so almost 5 years after the original grant, it is with great joy that I would like to formally report I have completed my 2005 Perl Foundation grant.

I have built, on top of the shoulders of Gabor and the rest of the Padre gang, a refactoring editor that I think will fire the imagination of Perl developers.

And I hope that this corrects and draws to a close one of the most persistently annoying failures in my Perl career.

Please enjoy the features in the new Padre releases as they appear over the next month and a half, leading up to Padre's second birthday party.

I am extremely proud of what the whole Padre gang has achieved in the last year, and I can't wait to stabilise a new group of features just waiting for the background support to firm up a little more.

Wednesday June 09, 2010
12:49 AM

The CPAN just got a whole lot heavier, and I don't know why

According to the latest CPANDB generation run, and the associated Top 100 website, something big just happened to the CPAN dependency graph.

After staying relatively stable for a long time, with the 100th position coming in at around 145 dependencies and incrementing by 1 every 3 months, the "weight" of the entire Heavy 100 set of modules has jumped by 20 dependencies in the last week or so!

The 100th position is now sitting at 166 dependencies, and perennial leader of the Heavy 100 MojoMojo has skyrocketed to an astonishing 330 dependencies. The shape of the "Jifty Plateau" is also much less distinguishable, which suggests it might be more than just a pure +20 across the board.

The question is why?

Is this caused by the restoration of a broken META.yml uncovering formerly ignored dependencies? Or has someone added a new dependency somewhere important accidentally?

Wednesday June 02, 2010
10:37 PM

Padre::Task 2.0 - Making Wx + Perl threading suck faster

Some time Real Soon Now, I'll be landing the shiny new second-generation Padre Task API (which is used for background tasks and threading control) onto trunk.

A few people have asked me to give a quick high level overview on how it works, so this will be a quick bottom to top pass over how Padre will be doing threading.

The first-generation Padre::Task API was true frontier programming. While some of us knew Wx and others knew Perl's threads a bit, nobody really had any idea how the two interact. Indeed, the proliferation of bugs we found suggests that Padre has really been the first major application to use both at the same time.

Steffen Müller boldly led the Padre team into this wilderness, putting together a solid threading core loosely inspired by the Process API. He nursed a ton of bug reports and tracked upstream fixes.

Some of the code was a bit ugly in places, and the threads burned a ton of memory, but it worked and worked well. It worked well enough that Andrew Bramble was later able to extend it to add Service support for long running background tasks with bidirectional communication.

But while we've reduced some of the worst of the memory problems with the "slave driver" methodology, the API has been at the end of its natural life for a while now. It's hard to write new background tasks without knowing both Wx and threads, which limited access to only three or four people.

The goals for the new Padre::Task 2.0 are threefold.

Firstly, to allow the creation of the three distinct background jobs we need in Padre, in a way that doesn't abuse either Wx or Perl's threads mechanism. These three background job types are Task (Request,Wait,Response), Stream (Request,Output,...,Response) and Service (Request,Input,Output,...,Response).

Second, to allow the implementation to have (or have in the future) the theoretically smallest possible memory consumption beyond the minimum overheads imposed by Perl's threading model.

Third, to allow the creation of Wx + threads tasks without the need to learn either Wx or threads. This should open up background tasks to the main body of Padre contributors, beyond the elites, and spur great improvements in Padre's ability to take full advantage of multi-core developer machines.

A fourth bonus goal is to allow us to migrate Padre's backgrounding implementation to something other than threads in the future, without having to change any of the code in the actual tasks next time around. This should also allow the people that don't like Perl threads and want us to use something else to move their arguments from bikeshedding to actual proof-by-code.

After several months of experimenting, I've gone with a somewhat unusual implementation, but one that is completely workable and meets the criteria.

The core of this implementation is a communications loop that allows messages from the parent thread to a child thread, and back again.

The odd thing about this particular communications loop is that the two halves of the loop are done using utterly different underlying mechanisms.

The parent --> child link is done using Perl threads shared variables, in particular Thread::Queue.

Each Padre thread is created via a Padre::TaskThread parent abstraction, which governs the creation of the real thread, but also provides a Thread::Queue "inbox" for each thread. This is inspired by the Erlang micro-threading model for message passing, but is way, way heavier.

In this manner, the top level task manager can keep hold of just the queue object if it wants, feeding messages into the queue to be extracted at some unknown place in another thread it has no control over.

Once spawned, each worker thread immediately goes into a run-loop waiting on messages from its message queue. Messages are simply RPC invocations, with the message name being a method name, and the message payload becoming method parameters.
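A sketch of what that run-loop amounts to (simplified, with hypothetical names):

use threads;
use Thread::Queue ();

# Worker main loop: block on the Thread::Queue inbox and dispatch each
# message as a method call on the worker object.
sub run {
    my ($self, $inbox) = @_;
    while ( my $message = $inbox->dequeue ) {
        my ($method, @params) = @$message;
        last if $method eq 'shutdown';    # explicit stop message
        $self->$method(@params);          # RPC-style dispatch
    }
    return;
}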

This gets a little weird (you end up passing shared Thread::Queue objects as a payload inside other shared Thread::Queue objects) but the end result is that you don't have to spawn Perl threads from the parent thread; you can spawn them from child threads while retaining the parent's ability to send messages to them, regardless of where on the thread spawn graph they are actually running. Every thread has the ability to clone itself if passed an empty Padre::TaskThread object to host it.

The end result of this trickery is that we can replicate the slave driver trick from the first-generation API. By spawning off an initial pristine thread very early in the process, when the memory cost is small, we can create new threads later by spawning them off this "master" thread and retain the original low per-thread memory cost.

And while we don't do it yet, we can be even tricksier if we want. If we have a thread that has had to load 5, 10, or 20 meg of extra modules, we don't need to load them again. Instead, we could choose to clone that child directly and have a new thread with all the same task modules pre-loaded for us.

The second half of this communications loop is the up-channel, which is done using a totally different Wx mechanism. For communicating messages up to the parent, the Wx documentation recommends the creation of different messages types for each message, and then the use of a thread event.

This care and feeding of Wx turns out to be difficult in practice for non-elites, because you end up registering a ton of different Wx event types, all of which need to be stored in Perl thread-shared variables. And each message needs to be pushed through some target object, and the choice of these can be difficult.

Instead, what we do is register a single event type, and a single global event "conduit". As each message is received by the conduit, it filters down to just the appropriately enveloped events and passes only those along to the task manager. The task manager removes the destination header and routes the message to the correct task handle in the parent.

Again, messages are done as RPC style calls, with a message type being the method to call, and the payload being the params.
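In outline, the child-to-parent side looks something like this (hypothetical names, and much simplified relative to the real Padre code):

use Wx       ();
use Storable ();

# A single registered event type, and a single parent-side event
# handler (the "conduit") for ALL child-to-parent traffic.
my $EVENT_TYPE = Wx::NewEventType();
my $CONDUIT;    # set in the parent to the Wx object that receives events

sub message {
    my ($self, $method, @params) = @_;

    # Envelope: destination task handle id, method name, parameters
    my $payload = Storable::freeze( [ $self->hid, $method, @params ] );
    Wx::PostEvent( $CONDUIT, Wx::PlThreadEvent->new( -1, $EVENT_TYPE, $payload ) );
}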

This lets you do a reasonably simple form of cross-thread message passing (at least from the child to the parent, anyway).

sub in_child {
    my $self = shift;
    $self->message( 'in_parent', 'Some status value' );
    return;
}

sub in_parent {
    my $self   = shift;
    my $status = shift;
    # ...react to the status update in the parent thread...
    return;
}

We've also put a Padre::ThreadHandle layer over the task itself to do eval'ing and other cleanliness work, in the same way we put a Padre::PluginHandle over every plugin object to protect us against them.

This handle adds some extra value as well, automatically notifying the task manager when a task has started and stopped, and generally keeping everyone sane.

Within this communications loop lives the actual tasks.

The Task API forces work into a very rigid structure. The rigid rules on behaviour are the cost of allowing all the threading magic to happen without the task having to know how it happens.

The basic API looks something like this.

package Some::Task;

use strict;
use base 'Padre::Task';

# Constructor, happens in the parent at an arbitrary time before the job is run.
sub new {
    my $class = shift;
    my $self  = bless { @_ }, $class;
    return $self;
}

# Get ready to serialize, happens in the parent immediately before being sent
# to the thread for execution.
# Returns true to continue the execution.
# Returns false to abort the execution.
sub prepare {
    return 1;
}

# Execute the task, happens in the child thread and is allowed to block.
# Any output data should be stored in the task.
sub run {
    my $self = shift;
    require Big::Module;
    $self->{output} = Big::Module::something($self->{input});
    return 1;
}

# Tell the application to handle the output, runs in the parent thread after
# the execution is completed.
sub finish {
    my $self = shift;
    Padre::Current->main->widget->refresh_response( $self->{output} );
    return 1;
}

1;

To fire off this task, in the widget which commissioned this background work you just do something like this.

sub refresh {
    my $self = shift;
    require Some::Task;
    Some::Task->new(
        input => 'some value',
    )->schedule;
}

sub refresh_response {
    my $self   = shift;
    my $output = shift;
    # ...update the widget with the output...
}

As you can see here, at no point do you need to care about threads, or event handling, or that kind of thing. You wrap a task class around the blocking part of your app, have the finish push the answer through to your wx component, and then handle it in the wx component.

Of course, this is still a little fidgety. So I'm currently in the process of adding a secondary layer for the common case where the task is created primarily for the purpose of a single parent owner.

The new API layer simplifies things even more. The task takes an "owner" param that represents the Wx component that commissioned it. It does some weaken/refaddr based indexing magic to map the task to the owner object, and then makes sure there is a default finish method to route the answer automatically back to the owner.

The neat part about this is that it takes care of synchronisation problems automatically. If the task finish is called at a time when the Wx component has been destroyed, the answer is automatically dropped and ignored.

The owner can also explicitly declare that the answers from any tasks currently in flight are no longer relevant, and those will be dropped as well.

With the new helper code, your task is shrunk to the following.

package Some::Task;

use strict;
use base 'Padre::Task';

sub run {
    my $self = shift;
    require Big::Module;
    $self->{output} = Big::Module::something($self->{input});
    return 1;
}

1;

And the code in the owner component is just this...

sub refresh {
    my $self = shift;

    # Ignore any existing in-flight tasks
    $self->task_reset;

    # Kick off the new task
    $self->task_request(
        task  => 'Some::Task',
        input => 'some value',
    );
}

sub task_response {
    my $self   = shift;
    my $task   = shift;
    my $output = $task->{output} or return;
    # ...render the output...
}

This is not the 100% final API look and feel, but it demonstrates the volume of code that you will need to write to do something in the background in the Task 2.0 API.

There's also some interesting opportunities to make it smaller again for very simple cases, by using the generic eval task to inline the execution code.

sub refresh {
    my $self = shift;

    # Ignore any existing in-flight tasks
    $self->task_reset;

    # Kick off the new task
    $self->task_request(
        task  => 'Padre::Task::Eval',
        input => 'some value',
        run   => <<'END_TASK',
            my $self = shift;
            require Big::Module;
            $self->{output} = Big::Module::something($self->{input});
            return 1;
END_TASK
    );
}

sub task_response {
    my $self   = shift;
    my $task   = shift;
    my $output = $task->{output} or return;
    # ...render the output...
}

This would mean we don't need a task class at all, which could be useful in cases where we want to avoid generating lots of tiny 10 line task classes,
or where we want to generate the background code on the fly at run-time.

The downside would be that because the work is bunched inside a single task class, we lose the chance to do worker thread specialisation (where the workers track which tasks they have loaded into memory, so further tasks of the same type can be assigned to them in preference to other threads).

Perhaps we support all the different ways, so that we can pick and choose which option is best on a case by case basis and change them over time. That would certainly fit the Padre rule of "something is better than nothing, don't worry too much about having to break it later".

The downside of this approach, of course, is the breakage. The new Task API will basically kill any plugin currently doing their own background tasks.

The upside, of course, is that once they are fixed these plugins can do those background tasks much more efficiently than before.

The new Task 2.0 API will land in Padre 0.65, which should be out in about 2 weeks.

Wednesday May 26, 2010
09:44 PM

Aspect 0.90 - Aspect-Oriented Programming for Perl, rebooted

After two years of development at $work, we've finally completed our program of stability work. To the great credit of our 7 man Perl team, we've managed to reduce the (predicted and actual) downtime to about 1 hour per year for our billion dollar ecommerce site that you've never heard of.

With stability solved, our focus now turns to usability. And unlike downtime, which is easily measurable, we don't really have any decent metrics for usability. So in preparation for this massive new metrics development push, for about a year now I've been hacking away on Aspect, Perl's Aspect-Oriented Programming framework.

My plan is to use this to write performance and metrics plugins for a generic Telemetry system, which would let us turn metrics capturing on and off for a library of different events, driven by configuration on the production hosts.

If you have never encountered Aspect-Oriented Programming before, you can think of it as a kind of Hook::LexWrap on steroids. The core "weaving" engine controls the selection and subjugation of functions, and the generation of the hooking code, and then two separate sugar-enhanced mechanisms are provided on top of this core to define which functions to hijack, and what to do when we hit one of them.

The main power of AOP is in the ability to build up rich and complex hijacking conditions, and then write replacement code in a way that doesn't have to care about what is being hijacked and when.

This ability to manipulate large numbers of functions at once makes it a great tool for implementing "cross-cutting concerns" such as logging, tracing, performance metrics, and other kinds of functions that would normally result in having a bucketload of single lines scattered all through your implementation.

It also lets you do semi-evil "monkey-patching" tricks in a highly controlled manner, and potentially limited to only one single execution of one single process, without having to touch the target modules all the time everywhere.

For example, the following code shortcuts to false a particular named function, but only if it isn't being called recursively, and randomly for 1% of all calls, simulating an intermittent failure in the interface to some kind of known-unreliable hardware like a GPS unit.

use Aspect;

before {
    $_->return_value(undef);
} call 'Target::function' & highest & if_true { rand() < 0.01 };

While the original implementation of Aspect wasn't too bad and did work properly, it had two major flaws that I've been working to address.

Firstly, it was trying to align itself too closely to the feature set of AspectJ. Unlike AspectJ, which does aspect weaving (installation) at compile time directly into the JVM opcodes, Perl's Aspect does weaving at run-time using a function-replacement technique identical to Hook::LexWrap.

This difference means there's certain techniques that AspectJ does that we can't really ever do properly, but there are other options open to us that couldn't be implemented in Java like the use of closures.

For example, with Aspect you can do run-time tricks like that rand() call shown above. Since the if_true pointcut is also a closure, you can even have it interact with lexical variables from when it was defined.

This makes for some very interesting options in your test code, allowing lighter alternatives to mock objects when you only want to mock a limited number of methods, because you can just hijack the native classes rather than building a whole set of mock classes and convincing your main code to make use of them.

use Aspect;
# Hijack the first four calls to any foo_* method
my @values = qw{ one two three four };
before {
    $_->return_value(shift @values);
} call qr/^Target::foo_\w+$/ & if_true { scalar @values };

The second problem I've fixed is the speed. The original implementation made heavy use of object-orientation at both weave-time and run-time.

Logically this is a completely defensible decision. However, from a practical standpoint when you are going to be running something inline during function calls, you don't really want to have to run a whole bunch of much more expensive recursive methods.

So I've restructured much of the internals to greatly improve the speed.

I've dumbed down the inheritance structure a bit to inline certain hot calls at the cost of a bit more code.

I've also implemented a currying mechanism, so that run-time checks can safely ignore conditions that will always be true due to the set of functions that ended up being hooked.

And finally, rather than do method calls on the pointcut tree directly, we now compile the pointcuts down to either a single string expression or parameter-free equivalent functions.

In the ideal case the entire pointcut tree is now expressed as a single-statement Perl expression which makes no further function calls at all.

This not only removes the very expensive cost of Perl's function calls, but having everything in a single expression also exposes the boolean logic to the lower-level opcode compiler and allows culling and shortcutting at a much lower level.

With these two major improvements in place, I've also taken the opportunity to expand the set of available conditions to include a number of additional rules of interest to Perl people that don't have equivalents in the Java world, like tests for the wantarray'ness of the call.

use Aspect;
# Trap calls to a function we know behaves badly in list or void contexts
before {
    $_->exception("Dangerous use of function in non-scalar context");
} call 'Target::function' & ! wantscalar;

In addition to the expanded functionality, I've also massively expanded the test suite and written up full (or at least nearly full) POD for all classes in the entire Aspect distribution.

To celebrate this near-completion of the first major rewrite phase, I've promoted the version to a beta'ish 0.90.

There's still some work to do before release 1.00. In particular, at present the point context objects still work to a single unified set of method calls, regardless of the advice type.

This means that advice code can attempt to interrogate or manipulate things that are irrelevant or outright illegal in certain contexts.

Cleaning this up will elevate these bad context usages to "method not found" exceptions, and will also allow prohibiting the use of pointcut conditions in contexts where they are meaningless (such as exception "throwing" clauses for before { } advice when no such exception can exist).

It should also allow faster implementations of the point context code, because it won't need to take multiple situations into account and can specialise for each case.

Once this is completed, and some additional test scripts have been added for particularly nasty cases, I'll be able to do the final 1.00 release.

Sunday May 23, 2010
06:58 PM

Add safety to test scripts for File::HomeDir applications

I always feel a bit depressed when I go on a long geek/coding trip and don't come back having released something.

Since my work is slowly drifting more towards architecture and strategy where I do less coding than I used to, writing something this trip has been tough.

Fortunately, with a few hours to kill at Seattle airport I've managed to knock out File::HomeDir::Test, which is a way to test applications that use File::HomeDir safely and without spewing files into places that the installed application will discover later.

The pattern in your application is just the following...

use File::HomeDir::Test;
use File::HomeDir;

The first line creates a temporary directory that lives for the duration of the test script, adds hijacking flags for the base File::HomeDir load, and hijacks $ENV{HOME} to point to the temp directory.

After you have loaded File::HomeDir::Test (in a way that results in ->import being called) File::HomeDir should just work as normal.
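So a minimal test script looks something like this (a sketch, using only documented File::HomeDir calls):

use File::HomeDir::Test;
use File::HomeDir;
use Test::More tests => 1;

# my_home now resolves inside the throwaway test directory
ok( File::HomeDir->my_home, 'Got a (temporary) home directory' );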

It has been released as File::HomeDir 0.91. This release also sees the landing of the FreeDesktop.org driver in a production release for the first time, so please report any problems you may see on Linux.