After spending Day 1 mostly exploring the boundaries of NQP, I was hoping to put the pedal to the metal and start the Parrot Plumage implementation in earnest. I ended up with more bald yaks instead.
During the first day I discovered that I needed two features added to NQP to make progress on the ecosystem tools: the ability to do cross-language
eval (a prime raison d'être for Parrot), and the ability to declare object attributes directly for proper OO (NQP having made do so far with implicit or PIR-coded attribute definitions). Attribute declaration was not ready yet, but Tene++ had produced a (surprisingly simple) implementation of cross-language
eval, so I decided to push on in that direction.
The first thing I wanted to do was parse JSON data using the existing JSON parser that ships with Parrot. Unfortunately, the JSON parser hadn't been updated in some time, and it still conformed to an older compiler API. At the raw PIR level, the old API looks like this:
.local pmc json, data
json = compreg 'JSON'    # fetch the registered compiler
data = json(text)        # call it directly on the source text
The old API is still fully functional for doing work with just one language (plus PIR, which is always available), but it doesn't support working with multiple high-level languages in the same program. Thankfully, the new API involves only minor changes:
.local pmc json, code, data
json = compreg 'data_json'       # fetch the registered compiler object
code = json.'compile'(text)      # compile the source text to a sub
data = code()                    # invoke the sub to get the data
Essentially, the new API makes just two changes. First, the compiler is loaded using the
load_language op, rather than the more generic
load_bytecode op. Second, rather than the compiler being a simple subroutine called directly on the source text to produce a final result, a compiler is now an object with a
compile method that converts source text into a subroutine representing the "program". Since JSON is a non-executable language, this subroutine merely creates and returns the data structure representing the JSON text -- so the last step is to call the subroutine to get that data structure.
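Putting those two changes together, a complete new-style program looks roughly like this. This is a sketch only: the `load_language` line and the sample JSON text are my own additions for illustration, not part of the snippets above.

```pir
.sub 'main' :main
    .local pmc json, code, data
    .local string text
    text = '[1, 2, 3]'

    load_language 'data_json'     # new style: load_language, not load_bytecode
    json = compreg 'data_json'    # fetch the registered compiler object
    code = json.'compile'(text)   # compile the JSON text into a sub
    data = code()                 # invoke the sub to build the data structure
.end
```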
Updating the JSON parser to the new API would have been relatively simple, but for one problem -- the new API requires that the compiler have a lowercase name. Those of us with some experience in cross-platform development will immediately blanch upon discovering such a requirement and realizing (as in this case) that the existing JSON compiler not only used an uppercase name internally, but that some of the files implementing it were uppercased in source control, while others weren't. Oops.
After some discussion on
#parrot, we decided the least havoc for our users meant copying the compiler to a new name and deprecating the old one. Since we needed a new name anyway, and "compilers" for data-only languages are somewhat special, we came up with the informal convention of prefixing the data format's name, in lowercase, with data_. Thus
data_json was born.
Some hacking later, I discovered a few namespacing issues in the original JSON compiler; fixing those allowed
data_json to finally work from high-level languages such as NQP, as well as from multiple namespaces in PIR. Leaving some final details (such as converting any existing tests) for another day -- or another enterprising coder, hint hint -- I went on to the next task.
Using the ecosystem tools should be as easy as possible for the user. Rather than this:
/path/to/NQP/compiler/nqp.pbc plumage.nqp install foo
I'd much rather have this:
plumage install foo
Thus the next task was to figure out how to produce a proper executable from the
plumage.nqp source. Some generous cargo culting, a (sort-of) proper
Makefile, and a judicious hack later, I could produce a working
plumage executable for a trivial bit of NQP.
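The build pipeline amounts to three steps: compile the NQP source to PIR, assemble the PIR to bytecode, then wrap the bytecode in a native executable with Parrot's pbc_to_exe utility. A Makefile-style sketch, with file paths that are illustrative assumptions (nqp.pbc in particular lives wherever your NQP build put it):

```
# NQP source -> PIR -> bytecode -> native executable
plumage.pir: plumage.nqp
	parrot nqp.pbc --target=pir plumage.nqp > plumage.pir

plumage.pbc: plumage.pir
	parrot -o plumage.pbc plumage.pir

plumage: plumage.pbc
	pbc_to_exe plumage.pbc
```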
Sadly, by that point, my available time for the hacking session had run out, so we'll see what the next session brings. If you'd like to help, discuss interactions with other projects, or even just ask questions, come by
#parrot on irc.perl.org and ping me (
japhb). See you there!