Thanks to two handy references from Autrijus (http://www.cs.uu.nl/wiki/Center/AttributeGrammarSystem and http://www.haskell.org/tmrwiki/WhyAttributeGrammarsMatter), I finally learned what an attribute grammar was, and why Allison seems so obsessed with them. They do kick a good deal of butt.
In my excitement, I implemented Language::AttributeGrammar-0.01 (and 0.02 and 0.03).
The cool thing about my implementation is that it works on sparse, loosely-connected data structures. The module's algorithm assumes nothing about how a parent is connected to its child, so the connection could even be "look a URL up in a database and fetch it from the web". The syntax favors method calls or hash access, though.
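To make that concrete, here is a minimal sketch of the idea (not the module's actual Perl API; all names are made up for illustration): attribute rules are keyed by node type, and the evaluator reaches children only through an `attr` callback, so nothing about the tree's shape or connection mechanism is hard-coded — here it happens to be hash access.

```python
# Hypothetical sketch of an attribute-grammar evaluator. The evaluator
# never hard-codes how a parent reaches its child; rules do that
# themselves (here via plain dict lookup, but it could be anything).

def make_leaf(value):
    return {"type": "Leaf", "value": value}

def make_branch(left, right):
    return {"type": "Branch", "left": left, "right": right}

# Attribute rules, keyed by (node type, attribute name). Each rule gets
# the node plus an `attr(node, name)` callback for demanding attributes
# of other nodes on the fly.
RULES = {
    ("Leaf", "min"):   lambda node, attr: node["value"],
    ("Branch", "min"): lambda node, attr: min(attr(node["left"], "min"),
                                              attr(node["right"], "min")),
}

def attr(node, name):
    """Demand attribute `name` of `node` by running its rule."""
    return RULES[(node["type"], name)](node, attr)

tree = make_branch(make_leaf(3), make_branch(make_leaf(1), make_leaf(2)))
print(attr(tree, "min"))  # -> 1
```

Because the rules themselves decide how to reach a child, swapping dict lookup for a method call or a database fetch changes nothing in the evaluator.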
The implementation is lazy and demand-driven. The good side of that is that it never computes more than it needs to (and it can run in a tidy O(mn), for m the number of attributes and n the number of nodes). The bad side is that you can't dive into the data structure later and ask what an attribute of a particular node was, because the lazy algorithm may never have found a path to it. That means you have to build a new data structure carrying the information you want and return it. There is a way to remedy this ("semi-lazy" evaluation), and it requires much the same machinery as automatic attribute generation (namely, telling the module what your data structure looks like), so I expect those two features to arrive together.
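The O(mn) bound comes from memoization: each (node, attribute) pair is computed at most once, no matter how many times it is demanded. A small sketch of that idea (again illustrative, not the module's internals):

```python
# Hypothetical sketch of demand-driven attribute evaluation with
# memoization: each (node, attribute) pair is computed at most once,
# giving O(m*n) work for m attributes over n nodes.

class Leaf:
    def __init__(self, value):
        self.value = value

class Branch:
    def __init__(self, left, right):
        self.left, self.right = left, right

# Rules keyed by (node class, attribute name).
RULES = {
    (Leaf, "min"):   lambda n, attr: n.value,
    (Branch, "min"): lambda n, attr: min(attr(n.left, "min"),
                                         attr(n.right, "min")),
}

cache = {}
evaluations = 0  # count how many rules actually run

def attr(node, name):
    """Demand an attribute; run its rule only on a cache miss."""
    global evaluations
    key = (id(node), name)
    if key not in cache:
        evaluations += 1
        cache[key] = RULES[(type(node), name)](node, attr)
    return cache[key]

shared = Leaf(1)                        # this node is demanded twice
tree = Branch(Branch(shared, shared), Leaf(5))
print(attr(tree, "min"))   # -> 1
print(evaluations)         # -> 4: one evaluation per (node, attribute)
```

Note that the cache only holds attributes that some demand actually reached, which is exactly why you can't poke around afterward asking about arbitrary nodes.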
Anyway, the module's pretty cool, and I was quite impressed by how simple it was to implement. Check it out.