
Journal of luqui (5770)

Monday October 03, 2005
11:32 AM

Memory-saving lazy attribute grammars

[ #26991 ]

Okay, well, my lazy implementation in Language::AttributeGrammar is really memory hungry. Not only is there the O(mn) overhead usually present in lazy implementations, but all data is kept around until the end of the computation. This is silly (though it is still a trade-off).

I have two options for making a reasonable implementation.

1) Use a proper lazy approach that knows how to clean up after itself.
2) Compile the grammar using Kastens's algorithm.

I'll demonstrate option 1 by showing the Branch visitor in the repmin problem:

sub Branch::visit {
    my ($self, $globalmin) = @_;
    my ($left_min, $left_result)   = $self->{left}->visit(thunk { $globalmin->call });
    my ($right_min, $right_result) = $self->{right}->visit(thunk { $globalmin->call });
    return (thunk { min($left_min->call, $right_min->call) },                 # min
            thunk { Branch->new($left_result->call, $right_result->call) });  # result
}

$left_min and the rest are thunks. When you ->call a thunk, it runs the closure associated with it and then overwrites the closure with the value the closure returned. It's the standard lazy evaluation technique.
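To make that concrete, here's a minimal memoizing thunk -- a sketch of the technique as described, not necessarily what Language::AttributeGrammar actually does. The Thunk package name and the thunk { ... } helper are assumptions for illustration:

```perl
use strict;
use warnings;

package Thunk;
sub new { my ($class, $code) = @_; bless { code => $code }, $class }
sub call {
    my ($self) = @_;
    # On the first call, run the closure, cache its value, and delete the
    # closure so everything it captured becomes collectible.
    $self->{value} = delete($self->{code})->() if exists $self->{code};
    return $self->{value};
}

package main;

# Block-taking sugar matching the thunk { ... } syntax above.
sub thunk (&) { Thunk->new($_[0]) }

my $runs = 0;
my $t = thunk { $runs++; 6 * 7 };
print $t->call, "\n";   # 42 -- the closure runs now
print $t->call, "\n";   # 42 -- cached; the closure is not re-run
print "runs=$runs\n";   # runs=1
```

The delete of the closure is the important part: once the thunk is forced, whatever the closure had captured is no longer pinned by it.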

Now you say, "$left_min? Why not $left->{min}? The code would be clearer." (Also importantly -- and this is the trade-off -- you wouldn't have to know the types of the child nodes). Well, it's about memory. If I put all the thunks in a hash, then the values they return will be cleaned up after I run all the thunks (exhausting the pointers to the lexical pad). However, if I put them in lexicals, then the values will be cleaned up incrementally, as soon as the computation doesn't need them anymore.

For instance, you don't need the local minimum to build the result, so as soon as the local minimum of this branch is returned, there are no more references to $left_min and $right_min and they can be cleaned up. This is more important when you're passing around large data structures, which attribute grammars often do.
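Putting the pieces together, here's a hypothetical, self-contained repmin under this lexicals-not-hashes approach. The Thunk package, the Leaf visitor, the node constructors, and the knot-tying driver are all my assumptions to fill in around the Branch::visit shown above:

```perl
use strict;
use warnings;

# Memoizing thunk (assumed implementation, as sketched earlier).
package Thunk;
sub new { my ($class, $code) = @_; bless { code => $code }, $class }
sub call {
    my ($self) = @_;
    $self->{value} = delete($self->{code})->() if exists $self->{code};
    return $self->{value};
}

package main;
sub thunk (&) { Thunk->new($_[0]) }
sub min { $_[0] < $_[1] ? $_[0] : $_[1] }

sub Leaf::new   { my ($class, $v)     = @_; bless { value => $v }, $class }
sub Branch::new { my ($class, $l, $r) = @_; bless { left => $l, right => $r }, $class }

# A leaf's local minimum is its own value; its result is a leaf
# holding the global minimum.
sub Leaf::visit {
    my ($self, $globalmin) = @_;
    my $value = $self->{value};
    return (thunk { $value },
            thunk { Leaf->new($globalmin->call) });
}

# Branch::visit as in the post, with lexical thunks throughout.
sub Branch::visit {
    my ($self, $globalmin) = @_;
    my ($left_min, $left_result)   = $self->{left}->visit(thunk { $globalmin->call });
    my ($right_min, $right_result) = $self->{right}->visit(thunk { $globalmin->call });
    return (thunk { min($left_min->call, $right_min->call) },                 # min
            thunk { Branch->new($left_result->call, $right_result->call) });  # result
}

# Tie the knot: the "global" minimum fed to the root is the root's
# own local minimum.
my $tree = Branch->new(Branch->new(Leaf->new(3), Leaf->new(1)), Leaf->new(2));
my ($tree_min, $tree_result);
my $globalmin = thunk { $tree_min->call };
($tree_min, $tree_result) = $tree->visit($globalmin);

my $new_tree = $tree_result->call;
print $tree_min->call, "\n";            # 1
print $new_tree->{right}{value}, "\n";  # 1 (was 2: every leaf now holds the minimum)
```

Forcing $tree_result is what demands $tree_min, so there's no circularity: the min thunks never call back into $globalmin.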

The cool thing about this new lazy formulation (which isn't new at all; UUAG does it) is that its time and memory complexity is exactly the same as Kastens's algorithm's. There is linear overhead in the number of nodes, but that's the same complexity as the attribute values themselves; you just allocate them a little earlier. So all that's left are the details.

And that brings me to option 2. The details could be important. Kastens's algorithm is less flexible than the lazy algorithm above because it needs to know more about the input. Kastens's algorithm, though, can compile to an abstractionless representation, where pretty much the only operations are "bsr" and "execute attribute body". In Perl this will make a little difference, but not a noticeable one (I may benchmark to back up or invalidate this claim). However, if you are compiling to C or a JIT, this could make a really, really huge difference. If Perl 6 gets attribute grammar support (s/If/When/; it's just a question of whether it ships in the core or as a module), then it's probably a good idea to use Kastens's algorithm, because we can then compile to JITted Parrot (or even C).

However, all the lifetime analysis, visitation dependencies, and hard-to-compute bullshit like that in Kastens's algorithm are handled automatically by thunks and garbage collection. That means the lazy approach is significantly easier to implement. The lazy approach also accepts more grammars; that is, grammars that are not "ordered attribute grammars" in the sense of Kastens's paper. But the class of ordered attribute grammars is already quite large, so that's probably not an issue.

I think I'll go with option 1, mainly because it's easier to implement. Funny how big a factor that can be, and how little people understand how big a factor that is.
