Journal of tsee (4409)

Friday November 13, 2009
03:00 PM

Long time no hacking!

It's been a while since I could do some proper hacking on fun code. Whenever I had
spare time recently, I had to spend it on maintenance work. It's been even longer
since I could hack on new features for Padre. Over five months, to be exact. In
such a long time span, it's easy to forget lots of details about the code and its
design, particularly in a quickly evolving project like this one.

After my first glance at the code today, I felt like I was starting from zero.
But in the end, I managed to implement a couple of new features that I'm quite satisfied
with. This gentle slope of a learning curve demonstrates one of the enormous
strengths of the project: Most of the code is quite straightforward and easy to understand[1].
Newbies get up to speed quickly and adding features is a breeze. It's simply fun.

Within a few hours, I improved Gabor's recent "Find method declaration" feature
so that it can now use the "perltags"[2] file that is already used for class name
and method auto-completion. This makes determining the file in which the selected
method was declared a lot more robust. Since we can't look into Perl scalars with
static analysis, this currently works only for class-method syntax a la
Class->method. Even so, it's still an order of magnitude faster to use than
searching the files manually.
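
For the curious: a perltags file is essentially a ctags-style index mapping names to the files (and search patterns) where they are declared. Here's a rough sketch of how such a lookup could work; this is not Padre's actual implementation, and the file name and tag entries are made up.

use strict;
use warnings;

# Hypothetical sketch of a ctags-style lookup against a "perltags" file.
# Suppose the user asked for the declaration of Foo::Bar->frobnicate.
my ($class, $method) = ('Foo::Bar', 'frobnicate');

open my $fh, '<', 'perltags' or die "Can't read perltags: $!";
my @files;
while (my $line = <$fh>) {
    chomp $line;
    # ctags-style lines are roughly: tagname <TAB> filename <TAB> search pattern
    my ($tag, $file, $pattern) = split /\t/, $line, 3;
    next unless defined $file;
    push @files, $file if $tag eq $method;
}
close $fh;

# Knowing the class name (Class->method syntax) makes it possible to filter
# out hits from unrelated files; here we just list all candidates.
print "'$method' is declared in: $_\n" for @files;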

Next on my list was adding an XS document type with "calltips" support for the
perl interpreter API. Now, when you open an XS file in Padre, you will, among other
horrors, get shitty syntax highlighting because it's only recognized as a C source
file. But at least Padre can now show you the documentation of the perlapi macro
that you're hovering your cursor over. Trust me, that can be a salvation for XS
neophytes like myself who constantly misremember, misread, and re-read perlapi.
We have a long way to go to make Padre a useful editor for XS and C, but I'm
optimistic. It's just so easy to get started that I can relearn the code base
once every few months [3].

--Steffen

[1] "Easy to understand" doesn't necessarily mean that it's all great or elegant
code. The code quality in the project varies greatly. Usually, the initial, hackish
implementations are redesigned and refactored into good APIs as the need arises.

[2] Think "exuberant ctags", but for Perl. Generated with the Perl::Tags module.
Thanks to osfameron for promptly uploading a new version.

[3] Next up, I'd like to add multiple copies of the perlapi docs. One for each
release of perl. The user would then select the version of perl he's developing
for and get the correct documentation automatically. Eventually including all the
backported constructs from ppport.h. Help welcome.

Sunday September 13, 2009
03:09 AM

Users, speak up!

A few days ago, Gabor Szabo put out a request for help packaging Padre. Since then, a few people have stopped by the #padre channel on irc.perl.org willing to help package Padre using PAR. After a short introduction, some stated something like: "I can help with packaging Padre because we packaged a very large and complex $work application using PAR::Packer. It required a few hacks for the worst external dependencies, but apart from that it was as easy as pie."

This is great for several reasons. It means people consider Padre important enough that they are willing to help out and share their expertise in packaging applications with PAR. But it is also one of the few occasions on which I have seen folks publicly acknowledge the usefulness of the PAR toolkit. I regularly get private mail from people asking for help with packaging their big-ass $work applications because they've grown over their heads. Unfortunately, those people generally also ask for a bit of confidentiality. So I know PAR is being used all over the place, in large and small applications alike, scaling up to tens of thousands of computers. But I mustn't point at a list of companies who use it for deployment and do the "this is enterprisey software, worship it" dance.

So, dear PAR users: please share with the world that you use it, and maybe how. You're enjoying a good, free tool. The least you can do is give (public) feedback and share your experience with others. The same applies if you're not enjoying it at all. Let us know so we can improve it!

Cheers,
Steffen

PS: Most of this post would probably equally apply to many other successful modules. If you have stories about those, please share them as well!

Wednesday September 02, 2009
06:09 AM

Invaluable advice

Here's some invaluable advice:

If you have been hired to fill a non-technical position, do not let anyone know that you possess any kind of technical skills.

Do not let your laziness lure you into giving up clues. For example, do not tell your admin that you could just fix something yourself, given the appropriate permissions. Just bite the bullet and wait like the next guy.

Seems selfish? Say that when they start to queue for help. Even if you like to help and manage to serve all your friendly colleagues well within a reasonable amount of time and effort, the hidden cost shows up only later. It's when people higher up in the food chain realize that it is most efficient for the division to have the most technically skilled person carry most of the technical maintenance (and/or development) burden. Prepare for a drag if that happens.

Of course, I don't expect anyone to take this at face value, but it's sad that there's some sort of twisted truth to it.

Sunday July 19, 2009
12:02 PM

Reusable packaged applications with PAR::Packer

PAR::Packer is a tool that can (among other things) create stand-alone binary executables from Perl scripts. It scans the code for dependencies and includes them in the executable.

Until now, it wasn't possible to reuse these perl-installation-in-one-file packages to run Perl code that wasn't part of the package. This proved to be a bit of a problem in some cases because many Perl applications expect to be able to simply use $^X or a verbatim "perl" executable to execute another instance of the same perl. For this reason, I just implemented the --reusable option to pp, the PAR Packager command line tool. Since this feature may have dubious security implications, it is disabled by default. To use it, you do this:

  pp --reusable -o myapp myapp.pl
  # works normally:
  ./myapp

  # runs otherapp.pl providing the module library that's part of the executable:
  ./myapp --par-options --reuse someOtherApp.pl

If you try to use the --par-options --reuse option with an application that wasn't packaged to be --reusable, it will refuse to run.

The new feature requires a new PAR and PAR::Packer release. PAR 0.993 has been uploaded to CPAN. For PAR::Packer, you need the development version 0.992_01. If either one of those isn't available from your favourite mirror yet, you can find them here temporarily.

Cheers,
Steffen

Saturday July 04, 2009
06:29 AM

In defense of Perl ithreads

People like to point out the problems with Perl's threading, claim that it's simply the Windows fork emulation ported to other operating systems, and conclude that it's of no use otherwise. They generally omit the cases in which Perl ithreads are the only viable solution for concurrency in Perl.

First, you have to understand the i in ithreads. Read: interpreter threads. Each ithread in your Perl program has its own copy of the whole interpreter. Nothing is shared between the interpreters by default*. Most other threading implementations work the other way around: by default they share everything, and the user has to deal with locking any shared resources. Shared-by-default has many advantages over ithreads. Most obviously, an ithread takes a lot more memory. Furthermore, passing data between ithreads is rather painful and very, very slow. But there is, unfortunately, a big downside to shared-by-default:
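
As a small illustration of that shared-nothing default, here's a minimal sketch (assuming a threads-enabled perl; it's not taken from any of the projects mentioned here): data is only visible across ithreads if you explicitly mark it with threads::shared.

use strict;
use warnings;
use threads;
use threads::shared;

# Each ithread gets its own copy of everything; only variables marked
# ":shared" are visible (and mutable) across threads.
my $private = 0;            # cloned into every thread; each copy is independent
my $shared :shared = 0;     # explicitly shared between all threads

my @threads = map {
    threads->create(sub {
        $private++;                        # touches this thread's copy only
        { lock($shared); $shared++; }      # needs locking, visible everywhere
    });
} 1 .. 4;

$_->join for @threads;

print "private: $private\n";   # still 0 in the main thread
print "shared:  $shared\n";    # 4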

Real concurrency (i.e. multiple threads executing concurrently on multiple CPUs) doesn't seem to be feasible with the shared-by-default approach in a language such as Perl. This is because almost all operations -- including those that seem to be entirely read-only -- can potentially modify the data structures. Use a scalar that contains an integer in string context and the scalar will be modified to also contain a char*. Marc Lehmann explained this in more detail in his talk "Why so-called Perl threads should die" at the German Perl Workshop 2009. (Couldn't find his slides online, sorry.) As far as I know, the typical dynamic programming languages other than Perl only have (non-concurrent) cooperative multi-threading to start with.
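
You can watch that happen with Devel::Peek (a small demonstration of my own, not from Marc's talk): merely reading the integer in string context adds a cached PV, i.e. a char*, to the scalar.

use strict;
use warnings;
use Devel::Peek;

my $n = 42;
Dump($n);         # integer-only scalar: FLAGS contain IOK, no PV slot yet

my $str = "$n";   # plain string context, apparently read-only ...
Dump($n);         # ... but now FLAGS also contain POK and PV = "42" is cached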

Now, some will be quick to point out that ithreads are a mere fork() reimplementation with quite a few disadvantages. For a real fork, the kernel can do COW and non-thread-safe XS modules aren't a problem. But if your software has to run on Windows, the fork's a non-starter. As mentioned earlier, threads are used for the emulation of fork() on Windows. That means if you use fork(), you'll get multiple processes on systems which support it natively and multiple (i)threads on Windows with all the associated problems regarding memory use and thread-safety. If you're writing software predominantly on Linux, would you rather debug problems in your development environment or on your customer's (or generally user's) machine? I thought so. There is a case to be made for consistency.

The other big contender is the excellent Coro module (or an event loop). I suggest you have a look at its documentation to understand what it does exactly**. The downside? It's cooperative multi-threading. It doesn't really run concurrently. The code in each Coro has to cede control to the other coroutines regularly. If there is some code that's not directly under your control and takes a long time, your GUI or what not will be blocked. If you think about it a bit, you'll realize that combining heavy re-use of code from CPAN with cooperative multi-threading is a non-starter. In my case, I needed to parse Perl code using PPI. That can take seconds...
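
For illustration, here's a minimal sketch of the cooperative model. It only uses Coro's basic async/cede interface; driving the scheduler with a fixed number of cede calls is just a simplification for the example.

use strict;
use warnings;
use Coro;

async {
    for my $step (1 .. 3) {
        print "worker: step $step\n";
        cede;    # politely hand control to the next ready coroutine
    }
};

async {
    # If this were a long-running PPI parse with no cede in sight,
    # every other coroutine (and your GUI) would simply have to wait.
    print "other:  one long, uncooperative chunk of work\n";
};

cede for 1 .. 10;    # give the coroutines a few turns from the main program
print "main:   done\n";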

I'm all ears for suggestions on how to do concurrency right in a dynamic language. (here: Perl, but if another language does it better, that'd be interesting, too.)

The requirements are:

  • Real concurrency, i.e. using multiple cores.
  • Non-cooperative multi-threading due to code that is not under my control
  • Portability
  • A point I haven't touched in my rant: Ability to recover from crashes. You can recover from crashing ithreads.

* This statement may become painfully untrue if you use a non-thread-safe XS module, unfortunately.

** I'm not restating what it does in my own words because I'd expect them to be slightly inaccurate thus provoking eternal damnation.

Sunday June 14, 2009
11:50 AM

More refactoring goodness

Writing code that modifies code is a difficult task. Writing code that modifies Perl code is a horrible task. Thankfully, writing Perl code that modifies Perl code is not quite as horrible as it could be, thanks to Adam Kennedy's PPI.

One stated goal of the Padre project is to provide refactoring tools for Perl code as far as reasonably possible. So far, there are shortcuts for replacing a variable in its lexical scope, finding a variable's declaration (be it a lexical, file-scoped (our), or a package variable declared with 'use vars'), finding unmatched braces, and aligning code blocks on operators. These features are all useful, but they're only a subset of what more mature projects like Eclipse provide. A recent post on perlmonks discusses some examples of refactoring tools (or strategies) and their applicability to different languages. One of these is the Introduce Explaining Variable pattern. It's now implemented in Padre trunk. It's really quite simple, so let me explain it with an example:

The following code, taken from the Math::Symbolic::Derivative module, implements the derivative of the atan2 function. (I wrote it, so I'm complaining about my own cruft.) It basically implements the equation shown in the highlighted comment.

sub _derive_atan2 {
        my ( $tree, $var, $cloned, $d_sub ) = @_;
        # d/df atan(y/x) = x^2/(x^2+y^2) * (d/df y/x)
        my ($op1, $op2) = @{$tree->{operands}};

        my $inner = $d_sub->( $op1->new()/$op2->new(), $var, 0 );
        # templates
        my $two = Math::Symbolic::Constant->new(2);
        my $op = Math::Symbolic::Operator->new('+', $two, $two);

        my $result = $op->new('*',
            $op->new('/',
                $op->new('^', $op2->new(), $two->new()),
                $op->new(
                    '+', $op->new('^', $op2->new(), $two->new()),
                    $op->new('^', $op1->new(), $two->new())
                )
            ),
            $inner
        );
        return $result;
}

Now, this is pretty hard to read. The $op1 and $op2 variables correspond to the function operands y and x respectively. $d_sub is a closure that can derive recursively. The two templates are simply a shorthand so I didn't have to write someclass->new(...) repeatedly. To make x and y more apparent and to give $d_sub a name more fitting to its purpose, I open the file in Padre, right-click each of those variables, select Lexically Replace Variable from the context menu, and provide the new names. Similarly, I replace $inner. This yields:

sub _derive_atan2 {
        my ( $tree, $var, $cloned, $derive ) = @_;
        # d/df atan(y/x) = x^2/(x^2+y^2) * (d/df y/x)
        my ($y, $x) = @{$tree->{operands}};

        my $inner_derivative = $derive->( $y->new()/$x->new(), $var, 0 );
        # templates
        my $two = Math::Symbolic::Constant->new(2);
        my $op = Math::Symbolic::Operator->new('+', $two, $two);

        my $result = $op->new('*',
            $op->new('/',
                $op->new('^', $x->new(), $two->new()),
                $op->new(
                    '+', $op->new('^', $x->new(), $two->new()),
                    $op->new('^', $y->new(), $two->new())
                )
            ),
            $inner_derivative
        );
        return $result;
}

Of course, that leaves intact the giant expression which actually calculates the result. It makes sense to add a few more temporary variables with descriptive names. I select $op->new('^', $x->new(), $two->new()) in the above version of the code, right-click, and select Insert Temporary Variable. Then I type the name of the new variable, $x_square. Padre finds the beginning of the current statement for me and inserts a temporary variable declaration for $x_square at that point. It also replaces the selected text with $x_square. I manually replace another occurrence of the new temporary, then select $op->new('^', $y->new(), $two->new()) and have it replaced with $y_square accordingly. There's more that could be cleaned up, but this handful of clicks and practically no typing has improved the code's readability considerably:

sub _derive_atan2 {
        my ( $tree, $var, $cloned, $derive ) = @_;
        # d/df atan(y/x) = x^2/(x^2+y^2) * (d/df y/x)
        my ($y, $x) = @{$tree->{operands}};

        my $inner_derivative = $derive->( $y->new()/$x->new(), $var, 0 );
        # templates
        my $two = Math::Symbolic::Constant->new(2);
        my $op = Math::Symbolic::Operator->new('+', $two, $two);

        my $x_square = $op->new('^', $x->new(), $two->new());
        my $y_square = $op->new('^', $y->new(), $two->new());

        my $result = $op->new('*',
            $op->new(
                '/', $x_square, $op->new('+', $x_square, $y_square)
            ),
            $inner_derivative
        );
        return $result;
}

Thus Padre helps me refactor crufty code with ease. Many more of these tiny helpers are planned. Stay tuned!

PS: If this didn't convince you, maybe you should just give it a shot. I had to wrestle use.perl for hours to get it to add the highlighting in the example code. If I could add screenshots of the real thing...

Friday June 05, 2009
12:18 PM

Expanding products of sums

A few years ago, when I started studying physics, I wrote a set of modules for representing and dealing with algebraic expressions in Perl: Math::Symbolic. It's not a beauty, but it can be quite useful.

Occasionally, I get mail from people who want it to perform the tasks of a full computer algebra system such as Mathematica. The short answer is: it's not even close to such a thing, it never will be, and it was, in fact, never meant to be. One of the most frequent questions I get is a variation of:

"How can I expand this product of sums into a sum of products using Math::Symbolic?"

Here again, the answer is that it can't do that out of the box. But since I've been asked so many times, I wrote the two implementations that you'll find below. This is to prevent anyone from ever asking me again :)

The first implementation is really simple and I'd almost call it elegant. It is, however, also quite slow.

use strict;
use warnings;
use Math::Symbolic qw/:all/;
use Math::Symbolic::Custom::Transformation qw/:all/;

my $function = parse_from_string(<<'HERE');
(a + b)*(d + e + f)
HERE
#(b + c + d + e + f)*(a + b)*(d + e + f)*(a + b + c + d)*(a + b + c + d + e)

print "Before: $function\n";

my $pattern = Math::Symbolic::Custom::Pattern->new(
  parse_from_string('(TREE_x+TREE_y) * TREE_z'),
  commutation => 1,
);

my $expand = new_trafo(
   $pattern => 'TREE_x*TREE_z + TREE_y*TREE_z',
);

while (1) {
  my $result = $expand->apply_recursive($function);
  last if not defined $result;
  $function = $result;
}

print "After: $function\n";

It uses the Math::Symbolic syntax itself to define the logic. Most of the work is actually done by the pattern matching and transformation modules Math::Symbolic::Custom::Pattern and Math::Symbolic::Custom::Transformation. The Pattern class defines search rules that are matched against the expression's tree; the Transformation specifies what to replace a match with. Kind of like regular expressions. Just not as good (or fast).

The second implementation is likely much more useful and certainly a lot faster (though not optimized). It implements almost all of the logic manually and is based somewhat on Mark Jason Dominus' wonderful iterator pattern from Higher Order Perl. (Go buy the book if you haven't. It's an utterly enjoyable read.)

use strict;
use warnings;
use Math::Symbolic qw/:all/;

my $function = parse_from_string(<<'HERE');
(a + b)*(d + e + f)
HERE
#(b + c + d + e + f)*(a + b)*(d + e + f)*(a + b + c + d)*(a + b + c + d + e)

# First, split the product into sums
my @sums = split_formula( B_PRODUCT, $function );

#print "$_\n" foreach @sums;

# Split each sum into its sub-terms
my @terms = map {
  [ split_formula( B_SUM, $_ ) ]
} @sums;

my $n_terms = 1;
$n_terms *= @$_ for @terms;
print "Calculating all $n_terms terms...\n";
print "@$_\n" foreach @terms;

# This calculates the full formula in memory and stores it in $function
# $function = multiply(\@terms);
# print $function, "\n";

# This calculates each term and then prints it to STDOUT, but doesn't
# store it because memory is scarce
multiply_print(\@terms);

# We have to keep in mind that the formula is really a tree.
sub split_formula {
  my $optype = shift;

  my @formulas = @_;

  my @split;
  while (@formulas) {
    my $f = shift @formulas;
    if ($f->term_type == T_OPERATOR and $f->type == $optype) {
      push @formulas, @{ $f->{operands} };
    }
    else {
      push @split, $f;
    }
  }

  return @split;
}

# all of the following is based on the iterator
# pattern of Mark Jason Dominus' "Higher Order Perl", p. 128ff
sub multiply {
  my $terms = shift;
  my ($max, $count) = make_pattern($terms);

  my $func = make_product($terms, $count);
  return $func unless increment($max, $count);

  while (1) {
    my $prod = make_product($terms, $count);
    $func += $prod;
    last unless increment($max, $count);
  }

  return $func;
}

sub multiply_print {
  my $terms = shift;

  my $iter = make_term_iterator($terms);

  my $first = 1;
  while (1) {
    my $prod = $iter->();
    last if not defined $prod;
    if ($first) {
      $first = 0;
      print $prod;
    } else {
      print " + " . $prod;
    }
  }

  print "\n";
  return;
}

sub make_term_iterator {
  my $terms = shift;
  my ($max, $count) = make_pattern($terms);

  my $empty = 0;
  return sub {
    return() if $empty;
    my $func = make_product($terms, $count);
    $empty = !increment($max, $count);
    return $func;
  };
}

sub make_product {
  my $terms = shift;
  my $count = shift;

  # Note: One *could* save some CPU cycles by not cloning here (new).
  # BUT that may lead to fun debugging and interesting memory cycles
  # if you intend to actually use the tree.
  my $prod = $terms->[0][ $count->[0] ]->new;
  foreach my $i (1..$#$terms) {
    $prod *= $terms->[$i][ $count->[$i] ]->new;
  }
  return $prod;
}

sub increment {
  my $max = shift;
  my $count = shift;

  my $i = $#$count;
  while (1) {
    if ($count->[$i] < $max->[$i]) {
      $count->[$i]++;
      return 1;
    }
    else {
      $count->[$i] = 0;
      $i--;
    }
    if ($i < 0) {
      return();
    }
  }
}

sub make_pattern {
  my $terms = shift;
  my @max;
  my @pattern;
  foreach my $set (@$terms) {
    push @max, $#$set;
    push @pattern, 0;
  }
  return \@max, \@pattern;
}

I bet you can see why that second implementation doesn't give me as much of a warm, fuzzy feeling.

Cheers,
Steffen

Monday May 11, 2009
01:35 PM

Padre usability improvements

Over the weekend, I finally implemented a few simple usability improvements for Padre, the Perl IDE.

With the recently released version 0.35, Padre supports a context (right-click) menu that's actually context sensitive! (No shit!) If you right-click on a variable, Padre will offer additional options in the context menu that let you jump to the declaration of the variable or replace all occurrences of a lexical(!) variable.

Additionally, we stole a nice feature from Eclipse/Perl: If you hold Ctrl while left-clicking on a variable or subroutine name, the focus will jump to the respective definition. All this only works in the current file, unfortunately, but eventually, this will be a project-wide feature.

Today, a ticket in the bug tracker from Peter Makholm prompted me to implement an API for plugin authors. Your plugin can now provide different context menus depending on the document type, code at the cursor, additional modifier keys, or moon phase.

Update: Here's a simple example that you can copy and paste into your plugin to extend the context menu with a simple item:


sub event_on_context_menu {
        my ($self, $doc, $editor, $menu, $event) = @_;

        $menu->AppendSeparator();
        Wx::Event::EVT_MENU(
                $editor,
                $menu->Append( -1, Wx::gettext("Fun") ),
                sub { warn "FUN!" },
        );
        return();
}

Cheers,

Steffen

Saturday March 28, 2009
05:32 AM

How Padre saved my day

Every moderately proficient Perl programmer will eventually be faced with the horror that is old code written by people who still thought golf was good programming style. My worst experience to date came very recently, with code that had the friendly warning "use 5.002;" in the first line. As if that wasn't enough to scare the hell out of me, I was told the code had been written only in 2006 or so. Not just that: it had been devised by somebody I highly respect and know to be extremely intelligent, but an individual who simply hadn't known Perl when he wrote the program.

Here's one of the biggest downsides of the language in action. Somebody who isn't proficient but smart and creative will be able to craft complicated programs that (kind of) serve their complicated purpose and won't be readable by anyone but their inventor. Hackers who know the language well could do the same, but they know better ways to solve the problem at hand than resorting to unnecessary cleverness.

At this point, you already know the program in question doesn't use strictures. Instead, it does interesting stuff like using file handles (GLOBs!) with the literal names "0" through N to process N+1 files synchronously, or using $_ implicitly in a scope that spans well over 100 lines. Variables are named appropriately as $a1, $a2, $a3, $a4 and $aa1, $aa2, $aa3, $aa4. But I mustn't forget my favorite: $hwww!

If you've ever had to deal with a complicated program that uses only globals, you will most certainly agree that the first step to understanding it is to declare those variables lexically in the tightest scope possible. That isolation of contexts makes it a damn sight easier to grok what's happening.
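
Just to spell out what that first step buys you, here's an invented toy example (not from the code in question): once a counter or accumulator is a lexical in the tightest scope that needs it, you can reason about it without reading the rest of the file.

use strict;
use warnings;

# The old code would have used undeclared globals ($total, $i) visible to
# the entire file; confined lexicals can't be clobbered by, or leak into,
# unrelated code.
my @lines = ("foo\n", "barbaz\n", "quux\n");

my $total = 0;                  # lives only from here to the print below
for my $i (0 .. $#lines) {      # $i exists only inside the loop
    $total += length $lines[$i];
}
print "total: $total\n";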

But I digress. This is about how Padre helped me fix this. I'd love to say that I simply opened the document in the editor, positioned my cursor on one of those pesky pseudo-globals, and hit the "convert this global to a lexical in the tightest sensible scope" action in the Perl-specific-features menu. You know, it would walk the scope tree from the occurrence of the variable I highlighted, find the tightest scope that contains all occurrences of the variable, and declare it there. Furthermore, if it's used as a loop variable a la for($i=0;$i<...;++$i), it'd detect that its value is likely not depended on outside the loop and declare it there for me, too. But I haven't had the time to actually write that feature yet*.

So I still had to figure out the scope of each variable manually. But once I had declared a variable *somewhere*, I could simply hit "replace this lexical variable" in the aforementioned Perl menu and have all occurrences (including "${aa1}" in strings) replaced with a less meaningless name. This was particularly useful for loop variables, which tend to be reused in different scopes and thus with different meanings. A normal search/replace would have required user interaction to stop it after the current section of the code. One less distraction while trying to understand some complicated piece of code.

But this isn't really how Padre saved my day. It's that when this heavy use of the lexical replacement feature triggered a couple of bugs in it, I was able to dive into the implementation head-first and simply fix them. It's just Perl, and most of it is actually quite accessible! That's how Padre made my day less miserable: it helped me fix that ugly code and gave me the warm, fuzzy feeling of being in full control of my tools and, in particular, of being able to improve them when I need to.

* The key here is: I could! So could you or any other Perl programmer.

Wednesday February 04, 2009
01:50 PM

PAR::Repository with static dependency resolution

My previous journal entry was about the PAR::Repository auto-upgrading feature. That, however, was just a precursor to the big news. Here's (approximately) what I posted to the PAR mailing list a few days ago:

Let me provide some context. Ever since I wrote PAR::Repository::*, people have mistaken it for a PAR-based PPM replacement. It was never intended to be a package manager/installer like PPM but rather a sort of application server that can be comfortably and centrally managed, maintained, and upgraded. Even having separate staging and production repositories is quite simple, as a PAR repository is just a directory on an ordinary web server or file system. Heck, you can even import one into git and switch branches as your heart desires. Since the clients simply fetch the most current packages for their specific needs, they are always up to date when launching a new application from the repository.

After I gave a talk about PAR and the repository concept at YAPC::EU 2008 in Copenhagen, people again asked whether they could use a PAR repository in place of PPM. I said they couldn't, and that the fundamental difference is that PAR::Repository finds dependencies dynamically, recursively, at run time, whenever a module is required, as opposed to PPM's static dependency information. But at the time, I already had a secret scheme for adding static dependency information to PAR repositories. Since the work on PAR is done purely in my not-so-copious spare time, I didn't spill the beans just yet in case I'd never get around to finishing it. Seems I was lucky.

As of a couple of days ago, there are development releases of PAR, PAR::Dist, PAR::Repository, PAR::Indexer and PAR::Repository::Client[1] that support static dependency extraction from .par files, storage of that information in the repository, and its resolution and application in the client!

Getting to this point required a bit of Yak Shaving.

  • PAR::Dist's merge_par routine formerly simply copied the first package's META.yml. Now, it also merges the "provides" as well as the various requires-like sections.
  • The tests of both PAR::Repository and ::Client were in dire need of improvement because...
  • ... both modules needed some refactoring to make way for the rest of the changes.
  • PAR::Repository acquired a new index file for dependencies.
  • The PAR file injection routines use the information from META.yml to fill it. The removal routines correctly remove the information again.
  • To make up for the extra bandwidth required for the dependency information, a checksum-scheme has been implemented to check for updates.
  • The client has a new option "static_dependencies" to enable recursive resolution of dependencies as found in the new dependencies index.
  • The "use PAR {...}" interface now has a "dependencies" option that enables the client's static dependency processing.

All involved modules have new releases on CPAN. They are mostly developer releases, since there are bound to be serious bugs.

Thanks for reading!

Best regards,
Steffen

[1] To give the new releases a whirl, you can simply install PAR::Repository (for the server side) or PAR::Repository::Client (for the client, doh). There's no need to manually install all the distributions; they'll be picked up as dependencies.