cyocum's Journal http://use.perl.org/~cyocum/journal/

Demystifying Ocaml's Functors http://use.perl.org/~cyocum/journal/40362?from=rss

Since I started using Ocaml, functors have been rather mystifying to me. They are Ocaml's highest form of type abstraction through modules and, for beginners like myself, they can be downright frustrating to understand. So, I took it upon myself to learn how to write one of these things by hook or by crook. Here is what I took away from that experience.

What I will do here is take you through a trivial and probably completely useless example of a functor which creates types of linked lists. You will need to know some Ocaml before you dig into this, so I recommend having a look at the tutorial (http://www.ocaml-tutorial.org/).

So, first you need to have a module type for which you will create a functor:

    module type S =
    sig
      type t
    end;;

This creates a module type with only one thing in it: a generic type expression. Basically, this says: if a module is of type S, then it will have a type t defined in it.

Now for the functor itself:

    module Make (LinkedList : S) =
    struct
      type t = Empty | Node of LinkedList.t * t ref

      let make () =
        ref Empty

      let insert x ll =
        let new_node = Node(x, ref Empty) in
        match new_node, !ll with
          Node(_, next), Empty ->
            next := Empty;
            ll := new_node
        | Node(_, next1), (Node(_, next2) as head) ->
            next1 := head;
            ll := new_node
        | Empty, _ -> raise (Invalid_argument "Empty Argument")
      ;;

      let rec search x ll =
        match !ll with
          Empty -> false
        | Node(y, next) ->
            if x = y then
              true
            else
              search x next
      ;;
    end;;

What you should notice here is that there is code here. Basically, any code that can be written without reference to the concrete type of the thing being stored should be put here. You should also notice the LinkedList.t in the type expression at the top of the module. (LinkedList : S) means that LinkedList is a module of type S, so its type t is distinct from Make's own type t (remember that Make is itself a kind of module).

Now, we want to make LinkedList.t a concrete type. We do this by applying the Make functor to a module of type S: the functor attaches the code in Make to whatever module we pass in.
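If you feed the two definitions above to the Ocaml toplevel, it should report something like the following for Make itself (this is my own transcript sketch, not from the original post, and the exact layout varies between Ocaml versions). Note that LinkedList.t is still abstract at this point; compare it with the concrete signature further down, where it has become int:

    module Make :
      functor (LinkedList : S) ->
        sig
          type t = Empty | Node of LinkedList.t * t ref
          val make : unit -> t ref
          val insert : LinkedList.t -> t ref -> unit
          val search : LinkedList.t -> t ref -> bool
        end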
So, let's do that:

    module IntLinkedList = Make(struct type t = int end);;

See the struct inside the Make? That is the module that you want to attach the code in Make to. Because the module must be of type S, you now have to declare what type t is. The same goes for any values or functions declared in module type S. You can also define the module elsewhere, give it a name, and pass that name into Make, as in the sketch above.

Now, when you put this into the top level of Ocaml, you should get this signature:

    module IntLinkedList :
      sig
        type t = Empty | Node of int * t ref
        val make : unit -> t ref
        val insert : int -> t ref -> unit
        val search : int -> t ref -> bool
      end

Note how the line in Make, type t = Empty | Node of LinkedList.t * t ref, has been replaced by the type that you specified in the call to Make. This means that this is now a module that creates linked lists of ints.

You can now stack functor-created modules like so:

    module IntSet = Set.Make(struct type t = int let compare = compare end);;
    module LinkedListIntSet = Make(IntSet);;

which gives back:

    module LinkedListIntSet :
      sig
        type t = Make(IntSet).t = Empty | Node of IntSet.t * t ref
        val make : unit -> t ref
        val insert : IntSet.t -> t ref -> unit
        val search : IntSet.t -> t ref -> bool
      end

Now, you may be wondering to yourself: why has he gone through all this trouble when he could have just used type parameters? That is the canonical way to write linked lists in Ocaml, and it is just as type safe and probably faster. However, I am allowed to use a sledgehammer to crack a nut once in a while. This small example shows all the steps that you need to go through to write a functor in the first instance, and it is small enough to understand. I hope it helps those out there grappling with the idea of functors.

cyocum 2010-05-21T19:31:07+00:00 journal

LaTeX, the Humanities, and PDF commenting http://use.perl.org/~cyocum/journal/39463?from=rss

One of the biggest problems when working with Humanities scholars (or scholars in other areas) is that they are accustomed to using the commenting features of their favored format (usually Word). Now, PDF has the facility to allow commenting. The only problem is that Adobe keeps the keys to this particular kingdom pretty tight. No open-source PDF tool (other than PDFEdit, http://sourceforge.net/projects/pdfedit/, which isn't very stable from what I have read around the web) can add comments to a PDF in a graphical environment.

This brings us to closed-source but free products that allow you to add comments graphically to a PDF. There are two that I know of, and both work on Windows: Foxit Reader (http://www.foxitsoftware.com/pdf/reader/) and PDF-XChange Viewer (http://www.docu-track.com/home/prod_user/PDF-XChange_Tools/pdfx_viewer). Foxit has a free Linux version but it doesn't allow commenting yet.
Foxit does, however, work under Wine, though I have not tried that yet.

This takes down one more barrier to adopting LaTeX in the humanities, if you can get your university or supervisor to support a non-Adobe PDF reader.

I cannot wait for something like GNU Juggler (http://www.gnupdf.org/Juggler) or maybe something using the PoDoFo library (http://podofo.sourceforge.net/) to allow real commenting on PDFs.

cyocum 2009-08-14T22:24:16+00:00 journal

Scholarly Citation in a Digital World: Some Thoughts http://use.perl.org/~cyocum/journal/38450?from=rss

Amazon has released the newest version of the Kindle (http://en.wikipedia.org/wiki/Kindle). This event has caused me to re-evaluate the relationship of Humanities scholarship (http://en.wikipedia.org/wiki/Humanities) to its most basic and technical part, to wit, citation. Citation is the bedrock upon which scholars in the Humanities (and the Sciences) build and comment on each other's work in publication. It is also the source of much of scholarship's tedium. While thinking about the Kindle and the way in which the system works (it seems that the inner format is a limited form of HTML), I came to the conclusion that citation will become a great battleground in the future of scholarship. The fundamental problem is, simply stated, this: HTML does not guarantee the placement of a particular piece of text anywhere in the document or on the screen.

The foundation of citation is the page number or, even, the concept of the page. This is, of course, taken from the idea of a book. It does not hold, however, within the realm of markup languages. The renderer of a markup page is generally allowed great freedom in presenting information on the computer screen. As the idea of a page is broken down by this, and by the fact that the Kindle and other ebook readers do not have a standard page size, the technicalities of citation in scholarship will take on greater importance than before.

To forestall much criticism: the document fragment, as defined in the SGML (http://en.wikipedia.org/wiki/SGML) and HTML (http://en.wikipedia.org/wiki/HTML) standards, is not a specific enough tool for this purpose. This is especially true with handcrafted HTML created by a not-so-technically-inclined scholar who might forget to put the appropriate anchor tags in the proper places in their text. In terms of XHTML (http://en.wikipedia.org/wiki/XHTML), this might be overcome by the use of the "id" attribute and XPath (http://en.wikipedia.org/wiki/XPath) to create a new form of link that would allow a reader to link directly to a specific paragraph by its "id" attribute (see the sketch below for what such a pointer might look like). In more general terms, the XML (http://en.wikipedia.org/wiki/XML) standard XLink (http://en.wikipedia.org/wiki/Xlink), which is sadly not widely implemented, might allow this. In terms of PDF (http://en.wikipedia.org/wiki/Pdf), as it is an electronic facsimile of a book, the concept of the page is still useful in citation and does not cause much difficulty in this regard.
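To make the "id" idea concrete, here is roughly what I have in mind (the file name and id value are my own, purely for illustration):

    <!-- in the published XHTML, the cited paragraph carries a stable id -->
    <p id="sect2-para4">The paragraph being cited ...</p>

A citation could then point at http://example.org/article.xhtml#sect2-para4 or, in XPointer terms, at #xpointer(id('sect2-para4')), though XPointer is about as widely implemented as XLink.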
Whither, then, the citation? While some citation guides have added electronic citation to their standards, they tend to use full URLs and the date accessed (see MLA for an example). This is inadequate when many in the developed world are moving to devices like the Kindle and the concept of a page becomes much more nebulous. This also affects the citation of electronic resources like arxiv.org (http://arxiv.org/), which I feel is the future model for online scholarly journals. While the use of PDF with its page numbers may be satisfactory in the PDF realm, when scholarly publishers move to more fluid models of text presentation and digital-only publication, the situation in the citation of these resources will be difficult. One way around this would be the ubiquitous use of the DOI (http://en.wikipedia.org/wiki/Digital_object_identifier), but this does not solve the underlying issue of how to cite specific parts of the text. In the end, there are no easy answers, but a discussion must take place and a forum created where solutions can be proposed.

I would be very happy to hear any ideas about how one might solve this puzzle.

cyocum 2009-02-10T20:35:06+00:00 journal

Perl6 Lives! http://use.perl.org/~cyocum/journal/38044?from=rss

Ok, this has probably been done by someone else who is better than me elsewhere, but I just wanted to show off a bit of Perl6 and the fact that you can code in Perl6, and that is just damn cool.

    use v6;

    say factorial(10);

    sub factorial(Int $int) {
      my $fac_times = sub (Int $n, Int $acc) {
        if ($n == 0) {
          return $acc;
        }

        return $fac_times($n - 1, $acc * $n);
      };

      die "Wrong argument!!" if $int < 0;
      return $fac_times($int, 1);
    }

What this code does is translate the factorial code in the Wikipedia article on tail recursion (http://en.wikipedia.org/wiki/Tail_recursion) from Scheme to Perl6. It even runs pretty well on an unoptimized build of parrot+rakudo. Thank you Parrot/Perl6 People!

update: I added the die and removed the useless "else" in the inner sub.

cyocum 2008-12-07T14:45:07+00:00 journal

Latex + verse package note http://use.perl.org/~cyocum/journal/37793?from=rss

When using the verse package in LaTeX, remember that if you put a square bracket at the start of the line following a \\ (which appears at the end of every verse line except the last), you will get an error. To get around that error, put the \null command just before the square bracket. I have no idea why it does that or why it works with the no-op \null there, but it does.
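For what it is worth, I suspect the reason is that \\ takes an optional [length] argument, so a [ at the very start of the next source line is read as the opening of that argument; the \null stops TeX looking for it. A minimal illustration (the verse text and the bracketed editorial line are my own invention):

    \documentclass{article}
    \usepackage{verse}
    \begin{document}
    \begin{verse}
    The first line of the stanza \\
    %[a bracketed editorial insertion] \\  % uncommented, this line triggers the error
    \null [a bracketed editorial insertion] \\
    And the last line of the stanza
    \end{verse}
    \end{document}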
cyocum 2008-11-03T12:41:29+00:00 journal

Journal Article Database? http://use.perl.org/~cyocum/journal/37435?from=rss

One of the neat things about Google Books (http://books.google.com/) is that I can keep track of my library digitally (I can do the same with LibraryThing, http://www.librarything.com/, as well). One thing I would really love to have is a similar system for my journal articles. The main reason for this is that I have a large number of journal articles photocopied and it is kind of difficult to keep track of them (other than the fact that I keep them in a huge pile rather than trying to sort them). If anyone knows of such a thing, please let me know.

What would be really neat would be to have a system that hooked into BibTeX so that it would automatically add a hyperref link to the article online rather than just to the bibliography at the end of your book/article.

cyocum 2008-09-13T15:01:57+00:00 journal

Another Note about LaTeX and Comments in the Humanities http://use.perl.org/~cyocum/journal/35969?from=rss

As many of you know, I have discussed this problem before (http://use.perl.org/~cyocum/journal/33228). I have not found a general solution yet, but I wanted to highlight one other problem that has manifested recently. When I get comments back from my supervisors, I have noticed that they reference their comments either by section number or by page. Those who have worked with LaTeX know that the section numbering is automatically generated and that you do not know the page number of something until you have compiled it to its final form. In this case, I generally lean on Emacs' search function to find where I should change something.

I am now very near the end of my PhD, so it is less of a problem than before. However, I have a friend in Education to whom I taught LaTeX, but she had to give it up because all of her professors use the comment features of Word to give feedback. I think that if LaTeX or PDF had an easy (and inexpensive) method for obtaining feedback, the Humanities might be more willing to give up the Word habit.

This causes me to wonder how people in Computer Science and Mathematics make comments on a LaTeX-produced paper. Hopefully, I will come up with some kind of solution when I have more time to think about it.

cyocum 2008-03-24T13:36:49+00:00 journal

Reading List Management http://use.perl.org/~cyocum/journal/35872?from=rss

Well, as many of you know from your own experience, keeping a reading list for your research can get a bit tedious. At first, I just had a plain text file with the title of the book and the shelfmark number for my university's library. The problem with this solution is that I had to manually erase entries as I read them, and I had duplicate entries because the file was getting pretty large. In addition, my university is moving from the Dewey Decimal System to the Library of Congress System, which means that my shelfmark numbers sometimes go out of date and I have to go look the item up again. So, I decided that it was time to get my computer to manage the list for me. This way I could reduce duplicate entries and I could write a way of picking random books to read. One other goal was to integrate journal articles into the list as well.

At first, I thought that something like LibraryThing (http://www.librarything.com/) might be the easiest solution. While it was fairly easy to get book information, it did not allow me to enter journal article information or other information that might be of interest to scholarly users. So, after playing around with it, I decided that writing my own would be the best idea and would allow me to flex my programming muscles again.

The first problem that I thought about was the file format for the list. As it is just a list of hashes stored in an array, I first thought I would use something like YAML::XS (http://search.cpan.org/~ingy/YAML-LibYAML-0.26/lib/YAML/XS.pm).
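Roughly the shape I have in mind — a minimal sketch with invented field names and file name, round-tripping the list through YAML::XS:

    use strict;
    use warnings;
    use YAML::XS qw(Load Dump);

    my $file = 'reading_list.yml';    # hypothetical file name

    # the whole list is an array of hashes, one hash per item
    my $list = [];
    if (open my $in, '<', $file) {
        local $/;                     # slurp the file
        $list = Load(<$in>);
    }

    push @$list, {
        type      => 'book',
        author    => 'Some, Author',
        title     => 'An Example Title',
        shelfmark => 'PB1215 .S66',   # made up
    };

    # pick something at random to read next
    my $next = $list->[ int rand @$list ];
    print "Read next: $next->{title}\n";

    open my $out, '>', $file or die "cannot write $file: $!";
    print {$out} Dump($list);
    close $out;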
YAML::XS compiled fine on my system (AMD64), so I thought it would work (come to find out, it segfaults on large data structures and I had to move to YAML::Syck, which worked perfectly; I will do some more investigation before filing a bug).

With the file format out of the way, I had to think about data entry. I hate data entry. So, I did a quick search for something that would allow me to interface with the Library of Congress or the British Library. Well, I discovered a module called ZOOM (http://search.cpan.org/~mirk/Net-Z3950-ZOOM-1.21/lib/ZOOM.pod) which implements the Z39.50 protocol for library information. As I am using Ubuntu, I thought that I could install it fairly easily. Nope: Ubuntu has an old version of the YAZ library, which ZOOM depends on, that does not work with the newest version of ZOOM, so I downloaded the library source and compiled it myself. ZOOM then installed perfectly and it works. That is one thing I really love about using Linux.

One of the problems here, again, is the MARC21 format, which is what the Library of Congress spits out on a successful ISBN search of their database. The main hurdle is that it is difficult (or I do not know enough about the format) to determine the author vs. the editor of a book. From the documentation for the MARC21 format (http://www.loc.gov/marc/umb/), it seems that the author could be tag 100, 110, or 111 and the editor could be tag 700, or not; I am not sure. So I have some code that looks at each of these tags, using MARC::Record (http://search.cpan.org/~mikery/MARC-Record-2.0.0/lib/MARC/Record.pm), such that I can get everyone into the output correctly (and even then I get it wrong sometimes); there is a sketch of the idea at the end of this entry. I also looked at Dublin Core Metadata (http://en.wikipedia.org/wiki/Dublin_Core), which is in XML and can be produced by the Library of Congress Z39.50 gateway. I had a very similar problem (again, it could be that I do not know the format or that I am being an idiot), as there is no tag for author or editor, just a "creator" tag, which is fine, but I would really like to know if the "creator" is an author or an editor of a book.

Otherwise, it works fine (I love getting the correct Library of Congress Call Number as well as a good Dewey one). One of my last problems is that there seems to be no single metadata repository for scholarly journals. I can imagine one fairly easily. There are two competing standards, the Digital Object Identifier (DOI, http://en.wikipedia.org/wiki/Digital_object_identifier) and the Serial Item and Contribution Identifier (SICI, http://en.wikipedia.org/wiki/SICI). JSTOR supports both DOI and SICI, but as JSTOR does not cover my discipline very well, it is a bit useless. For now, I have to enter the information manually, which is a pain, as I would like to just copy and paste a number and then have the information automatically inserted into the list.

Also, if anyone knows anything about the MARC21/Dublin Core formats and could give me some pointers (or show me where I am being stupid), that would be most appreciated. And if anyone knows a metadata repository for scholarly material in journals that is fairly comprehensive, that would be most helpful.
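For reference, the tag-poking boils down to something like the following. This is a sketch rather than the real code, and the tag choices are exactly the guesses described above:

    use strict;
    use warnings;
    use MARC::Record;

    # $raw is the USMARC blob handed back by the Z39.50 search
    sub names_from_marc {
        my ($raw) = @_;
        my $record = MARC::Record->new_from_usmarc($raw);

        my @names;
        # 100/110/111 are the main entry (personal/corporate/meeting name);
        # treat whichever one is present as the "author"
        for my $tag (qw(100 110 111)) {
            if (my $field = $record->field($tag)) {
                push @names, { role => 'author', name => $field->subfield('a') };
            }
        }
        # 700 added entries *might* be editors (or translators, etc.)
        for my $field ($record->field('700')) {
            push @names, { role => 'editor?', name => $field->subfield('a') };
        }
        return @names;
    }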
cyocum 2008-03-10T14:00:33+00:00 journal

Microsoft and Yahoo http://use.perl.org/~cyocum/journal/35559?from=rss

I was reading about the merger proposal by Microsoft to Yahoo (http://seattlepi.nwsource.com/business/349683_msftyahoo02.html). I am rather firmly opposed to it. Gates likes to talk about "innovation", but it rings rather hollow when Microsoft does not "innovate"; it buys someone already in the space it wants to enter. Honestly, I would rather see them continue their own search engine than buy Yahoo. Can they not compete by writing their own? I am rather hoping that they get turned down by the various regulatory bodies or by Yahoo's own shareholders.

cyocum 2008-02-02T12:19:21+00:00 journal

On Learning Lisp http://use.perl.org/~cyocum/journal/35120?from=rss

Lisp has always been one of those languages which cause strong emotions: love or loathing. For me, it has always been one of those mythical languages which only exceptionally smart people use (either in academic computer science or elsewhere). I have tried a couple of times to learn the language, but I finally found a book (Practical Common Lisp, http://www.gigamonkeys.com/book/) that explains it in a way that I can understand.

One of the most hated things about Lisp is the use of parentheses for delimiting constructs and the complete lack of syntax. On the other hand, the use of macros (both reader macros and normal macros) allows for completely redefining the language at runtime.

I have found that the human mind is a very flexible thing if given the chance. The use of parentheses, while at first intimidating, is only a hindrance if you allow it to be (or if you are looking for a reason not to use Lisp); as you continue using the language, the parentheses tend to fade into the background as you marshal your functions into the right forms.

While I am still learning the language (I am doing toy-like programming, like reading the RIFF header from WAV files and writing small macros to work with CLOS), I still have much to learn about functional programming and the Lisp/Scheme style of programming in general. This does not mean that I will be giving up Perl any time soon, but I hope it will teach me a few new tricks that I can use in other programming situations.

cyocum 2007-12-17T14:37:00+00:00 journal

My Review of Ubuntu 7.10 (Gutsy Gibbon) http://use.perl.org/~cyocum/journal/34791?from=rss

I have now used Ubuntu as my primary desktop for seven days, so I thought I would write down my initial impressions of it. First, I would have to say that I can do everything on Ubuntu that I could do in Windows, as long as it is not hardcore gaming. Using the Add/Remove software applet has been pretty much all that I have needed. I have to say, though, that installing LaTeX proved to be a bit daunting, as I had to use the Synaptic package manager to install it rather than Add/Remove (AUCTeX was similar). Otherwise, it was painless to install new software. It would also be nice to have a package for the MiKTeX Package Manager, which is fantastic even on Linux for installing LaTeX styles and other LaTeX stuff.

Now, I will have to say that I bought a Dell with an X1300 Pro from ATI, which was probably a mistake on my part, but now that ATI is releasing Linux drivers, I think it might turn around. In any case, once I had installed the OS on the hard drive, Ubuntu said that there were restricted drivers that I could use for my hardware. So, I decided that I would try them out. After clicking on a few things, which seemed to install them, I rebooted the machine. Then I got what is called the "black screen of death" on reboot.
Come to find out there is a file on my machine that black lists restricted drivers and I had to modify this file to allow my machine to boot into X. While I was able to do that from the command line (my FreeBSD experience was definitely a help here), I am not sure what a new person would think of this. Anyway, I decided to spend most of the day installing the new ATI drivers from AMD. My efforts bore great fruit (I now have 3d effects on my desktop and other goodies) but it left me feeling that if I were a "normal" user and I got the "black screen of death", I would be less inclined to use Ubuntu in the future.</p><p>Otherwise, I have to admit that now I do not have to go through hoops as on Windows to get my favorite software to work. As I said, I can do anything that a windows user can do and I get the benefit of having a system that will nearly always works. The only thing that I miss right now is the gaming aspect of my computer. I installed the AMD64 version and I am not sure if any of my old video games will work under Wine emulation. Hopefully, in the near-ish future, games will begin to appear for my machine and I can start to take advantage of my computer for entertainment again.</p><p>I would definitely recommend Ubuntu for the savvy computer user who is not afraid to dig a bit to get the gold. I have a couple of people that I am going to install Ubuntu on their computers in the near future and I hope that this distro will keep on doing the good work.</p> cyocum 2007-10-30T19:39:53+00:00 journal Kicking the Microsoft Habit http://use.perl.org/~cyocum/journal/34746?from=rss <p>Well, I finally did it. After years of promising myself that I would kick my Microsoft habit, I did it. It would be honest to say that it was in a fit of pique but I did it all the same. I went for Ubuntu 7.10 (amd64) and I am still getting comfortable in my new computing home. Hopefully, I will not have the "OMG What Have I Done" feeling tomorrow morning.</p> cyocum 2007-10-24T22:11:35+00:00 journal Tinkering with BibTeX, Perl, and Yaml http://use.perl.org/~cyocum/journal/34025?from=rss <p>After working with BibTeX for my thesis, I was looking into the style language which comes with it because, being in the Humanities, we have some fairly complex needs when it comes to referencing. This is especially true in the history based disciplines (i.e. Chicago in the States and MHRA in the UK). At this point, the jurabib package seems to fit the bill as it does generally everything but if you look at the code which underpins it, it is generally difficult to understand (and my latex-foo is still very weak). I also took a look at the style language which comes with BibTeX and I did not like what I saw. Reverse Polish Notation works great for stack based languages but it does not work well as a method for humans to describe something to a computer. Not only that but BibTeX has a fixed memory structure (bibtex8 fixes some of this but it still has an upper limit).</p><p>What I wanted was something that was specific and easy to understand. Less code is generally a good way for having less bugs (although, not always). At the same time, I had a bad reaction to the style language; I did not like it at all. So, I figured that I could put something better together with Perl.</p><p>My first stop was CPAN in which I found <a href="http://search.cpan.org/~ambs/Text-BibTeX-0.37/BibTeX.pm">Text::BibTex</a>. While this is fantastic work, it relies on a C library and is thus not completely portable. 
From the documentation of that module, I began to understand some of the edge cases which come with the BibTeX file format. I began to wonder whether another format would be beneficial, but I hacked together a version in Parse::RecDescent which kind-of works (it does not handle @string's and @comment's just yet). After working on a Parse::RecDescent grammar, I thought that a file format for which I did not have to support a parser would be better than fighting with BibTeX's file format. This is when I thought of using YAML as the basis for my BibTeX replacement, as it is easily readable by people and by computers.

With these things in mind, I wrote a bit of code to read a YAML-based format for bibliographic information. I ended up with something that looks like this (I have not fully figured it out just yet, so it might change based on experience):

    ---
    preamble:
      string:
        JIES: The Journal of Indo-European Studies

    Ahyan2004:
      type: article
      author: Ahyan, Stepan
      title: "The Hero, the Woman, and the Impregnable Stronghold: A Model"
      year: 2004
      volume: 32
      pages: 79-100

(The title has to be quoted; an unquoted colon-plus-space inside a plain value can trip the YAML parser.)

This is read by the YAML module and handed back as a hash reference, which works very well for the purpose, and I don't have to maintain the file format (yay!). Once this is parsed, then what?

Well, the way BibTeX works is that it reads LaTeX's .aux file looking for certain tags (\citation{Ahyan2004}, for example) as well as tags that tell it where the bibliographic database is and where the style file is. I wanted to replace the style file with a Perl solution. So, I wrote a class called BibStyle::Style which is the base class for all styles in this scheme. It will also provide some of the basic stuff needed to make styles (like adding italics and wrapping the entire entry in the appropriate \bibitem, which goes into the .bbl file to be read by LaTeX on the second run). The only problem is that I don't know at run time whether someone has written a proper style class, so I have to use eval "require $class;";, which I am not very happy about (if anyone has an idea of how to write a plug-in style without that, please feel free to enlighten me). I then call the method style_entry with the key and a hash ref to the contents. The class then does whatever it needs to do to style the entry. It passes back a styled entry for the \bibitem in the .bbl file. Here is where there are a few problems.

The \bibitem command takes one or, optionally, two arguments; the one in the [] is the label which is inserted when you call \cite{key}. This label is exactly the same every time you cite the entry, so you have to redefine \cite so that, if it sees the same key as last time, it inputs "ibid." instead of the same label over again, which is possible but looks bad and is prohibited in MHRA style. I am investigating some LaTeX-foo tricks to allow the ibid style.

I have not yet implemented the sort function on the bib data, but it should not be terribly difficult, although I have to watch out for the backslash-style commands which put accents and stuff on letters. I think I can map most of the common ones to their Unicode equivalents and sort on that.

Right now the code is not in any state to release. Honestly, it is in the tinkering stage, but I wanted to show that it is indeed possible to replace BibTeX with something else.
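To give a feel for how the pieces fit together, here is the rough shape of the driver as I picture it — a sketch with invented names and a made-up command-line interface, not the actual code. It also shows one way to avoid the string eval: turn the class name into a file path and require that:

    use strict;
    use warnings;
    use YAML qw(LoadFile);

    my ($aux_file, $bib_file, $style_class) = @ARGV;

    # gather the \citation{...} keys from the .aux file
    my @keys;
    open my $aux, '<', $aux_file or die "cannot open $aux_file: $!";
    while (my $line = <$aux>) {
        while ($line =~ /\\citation\{([^}]+)\}/g) {
            push @keys, split /,/, $1;
        }
    }
    close $aux;

    # the YAML database: a hash of entries keyed by citation key
    my $db = LoadFile($bib_file);

    # load the style plug-in without a string eval:
    # Foo::Bar becomes Foo/Bar.pm, which require() will find in @INC
    (my $style_file = "$style_class.pm") =~ s{::}{/}g;
    require $style_file;

    open my $bbl, '>', 'output.bbl' or die "cannot open output.bbl: $!";
    for my $key (@keys) {
        my $entry = $db->{$key} or do { warn "no entry for $key\n"; next };
        print {$bbl} $style_class->style_entry($key, $entry), "\n";
    }
    close $bbl;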
In fact, it seems pretty easy, at this point, to replace it with the styles that are commonly used in the sciences and the parenthetical styles used by Harvard and MLA. The Chicago, Oxford, and MHRA styles will need more LaTeX intervention to work correctly. My Parse::RecDescent version might hang around as a translator if (and it is a big if) I release the code.

cyocum 2007-08-07T12:25:36+00:00 journal

MMORPGS, Programming, and Magic http://use.perl.org/~cyocum/journal/33870?from=rss

A thought came to me the other day. I am not an expert on MMORPGs or their development or design, but I thought it would be very cool if, instead of level grinding for more magic or getting little icons to cast spells, you made magic a scripting language within the environment.

When a player starts the game, you give them no information about how to do magic. They must be interested enough by the hints of magic to quest for it. Even when they do quest for it, you only give them a bit of information at a time, in highly obscure language.

As I said, I am not a designer of these kinds of things, but it would be much more interesting than level grinding. I am not sure how you would fit it into a consistent style, but it is an intriguing idea.

cyocum 2007-07-24T08:58:56+00:00 journal

Truly Random Numbers http://use.perl.org/~cyocum/journal/33506?from=rss

My friend and I were discussing different role-playing game systems. We started discussing the 6000d6 of the Dragonball Z role-playing game. He wanted to see them plotted out on a graph. So, I pulled up my favorite statistical programming language, R, and set to work. After looking at the different random generators, I came across a package for random numbers which draws its numbers from Random.org (http://www.random.org/), which uses atmospheric noise to generate truly random numbers. Well, I plugged it in and generated 6000d6 very quickly. I then pulled up a histogram of the results, and I expected to see the normal distribution (or bell curve). Little did I know that truly random die rolls are possible, and I had to dig around in Wikipedia to find out.

Honestly, I think that I made a mistake and that they do fall into the normal distribution, but I am not well enough versed in the world of R to force out that information. Still, I thought it extremely cool that I now have a truly random die roller!

cyocum 2007-06-12T22:02:24+00:00 journal

LaTeX, Editorial Comments, and the Humanities http://use.perl.org/~cyocum/journal/33228?from=rss

As one of the few people getting a degree in the Humanities (Celtic Studies) and using LaTeX to write my dissertation, it can be a bit disconcerting when people rant and rave about the problems of Word but seem reluctant to have anything to do with an alternative solution.

I often ask people to read and comment on my work. This has become a bit of a problem because my friends hate looking at LaTeX code. One of them has gone so far as to blame LaTeX for some of my own writing problems. I have resorted to printing off copies (the Computer Science department has free printing, so I use a friend there to print off copies for comments). This is not the best solution by any means and is a waste of paper.

I was poking around Adobe Acrobat and I found out that there is a comment feature in it. After wandering around the web for a while, I discovered that you have to pay Adobe a mint to get that functionality.
It seems that PDF was never meant to be edited directly.

I am not sure at this point what to do if I want electronic comments on my dissertation. If I compile it into PDF, the only way to get comments back is to send a hard copy. If I send a straight LaTeX file, I get a continuous whine about it not "looking right" and questions about all the commands. People, it seems to me, have become so sensitive to the look of something that they have completely forgotten about the content. I have thought of using "detex" to strip the LaTeX commands out, but I suspect that if I do that I will get "where are all the footnotes?" questions.

I am still looking for a good solution to the editorial-comments-in-LaTeX-for-the-humanities problem. I might contact some of the people in the LaTeX community to see if there is a solution out there that I have not been able to get Google to spit out.

cyocum 2007-05-09T09:47:36+00:00 journal

Thoughts about Working with TGE and Transformation Languages http://use.perl.org/~cyocum/journal/32580?from=rss

As I am beginning to work with TGE, I am thinking about how best to represent this kind of transformation. PIR is great for now, but the language tag keeps leaping out at my eye and I keep wondering about the future of tree transformations.

There are a couple of examples of tree transformation already available from the markup language world. In SGML, you can do this with DSSSL (http://en.wikipedia.org/wiki/DSSSL), which is a Scheme-like language; I have not worked with it. In XML, you have two: XSL-FO (http://en.wikipedia.org/wiki/XSL-FO) and XSLT (http://en.wikipedia.org/wiki/XSL_Transformations). From what I have gathered, XSL-FO is for formatting to any other output format, while XSLT is for transforming into another XML representation. These two languages use XML as their base language rather than some other representation, as DSSSL does.

It will be interesting to see what the wizards have up their sleeves for the transformation end of the compiler tool chain. I would figure that it will go down the same path as PGE, with a Perl6-like syntax. Scheme and Lisp have much going for them in terseness, but I think that it would be easier for a programmer if the representation of the transformation is in a similar language to their grammar. Thus, although I sometimes wonder about using XML as a transformation language, it seems to work out quite well for its purpose.

cyocum 2007-03-05T17:35:09+00:00 journal

PGE and Basic 1964: an Annotated Journey http://use.perl.org/~cyocum/journal/32484?from=rss

I have watched the development of Perl6 and Parrot for some time. When PGE was released, making the development of languages for Parrot easier, I was very excited. I have worked previously on an internal web-testing language for a well-known e-tailer; that was based on ANTLR and written in Java. So, recently, I finally decided to bite the bullet, especially after chromatic's blog post (http://www.oreillynet.com/onlamp/blog/2006/03/inside_parrots_compiler_tools.html) about his experience with Parrot's compiler toolchain.

First, however, I needed a project that was fairly simple but had some hard tricks later on. I decided that I would reach back in time to Dartmouth BASIC from 1964.
There is a <a href="http://www.bitsavers.org/pdf/dartmouth/BASIC_Oct64.pdf">manual</a> [warning: PDF] available that does a good job of describing the language.</p><p>After reading this and the <a href="http://www.pmichaud.com/2006/pres/yapc-parsers/start.html">presentation</a> on PGE, I decided to figure out how to use PGE. In this endeavor, I downloaded a recent tar-ball and compiled it on my FreeBSD machine that I used for this kind of stuff.</p><p>The best place to look for code to steal is in the parrot/languages directory. Among the best places to get understanding is in the abc (a bc clone) language and punie (perl 1 in PGE). You will find their grammars in their<nobr> <wbr></nobr>/src directories.</p><p>As for PGE itself, from what I understand, it is a subset of the Perl6 grammar feature that compiles down to PIR for use with Parrot. The mechanism that does this is in parrot/compilers/pge/pgc.pir which will take your grammar and output the PIR to standard out. This PIR is not, at this point (a tar-ball just before parrot 0.4.9), output a ready to run grammar. You need to provide a bit of superstructure. What I did was crib code from abc.pir in<nobr> <wbr></nobr>/parrot/languages/abc/src which looks like this:</p><blockquote><div><p><nobr> <wbr></nobr><tt>.namespace [ 'Basic64' ]<br> <br>.sub '__onload'<nobr> <wbr></nobr>:load<nobr> <wbr></nobr>:init<br>&nbsp; &nbsp; load_bytecode 'PGE.pbc'<br>&nbsp; &nbsp; load_bytecode 'PGE/Text.pbc'<br>&nbsp; &nbsp; load_bytecode 'PGE/Util.pbc'<br>&nbsp; &nbsp; load_bytecode 'PGE/Dumper.pbc'<br>&nbsp; &nbsp; load_bytecode 'Parrot/HLLCompiler.pbc'<br> <br>&nbsp; &nbsp; # import PGE::Util::die into Basic64::Grammar<br>&nbsp; &nbsp; $P0 = get_hll_global ['PGE::Util'], 'die'<br>&nbsp; &nbsp; set_hll_global ['Basic64::Grammar'], 'die', $P0<br> <br>&nbsp; &nbsp; $P0 = new [ 'HLLCompiler' ]<br>&nbsp; &nbsp; $P0.'language'('Basic64')<br>&nbsp; &nbsp; $P0.'parsegrammar'('Basic64::Grammar')<br>.end<br> <br>.sub 'main'<nobr> <wbr></nobr>:main<br>&nbsp; &nbsp;<nobr> <wbr></nobr>.param pmc args<br>&nbsp; &nbsp; $P0 = compreg 'Basic64'<br>&nbsp; &nbsp; $P1 = $P0.'command_line'(args, 'target' =&gt; 'parse')<br>.end<br> <br>.namespace [ 'Basic64::Grammar' ]<br>.include 'basic_gen.pir'</tt></p></div> </blockquote><p>The last line is the most important as this is where you include the output from pgc.pir.</p><p>Once you have that in place, you can start working on your grammar. The rest of this will be me discussing my grammar. Any feedback would be very much appreciated. I have probably made many mistakes (both in code and conceptual) and there are a few things that still need to be done but it seems to parse my, admittedly nonsensical, basic program made up of different BASIC statements.</p><blockquote><div><p> <tt>grammar Basic64::Grammar;</tt></p></div> </blockquote><p>As with the new structure for parsing in Perl6, this declares a new grammar in its own namespace. In a fully functioning Perl6, this would allow it to reside in its own file and it would be an error to declare any other user defined type in that file.</p><blockquote><div><p> <tt>token TOP { ^ &lt;program&gt; $ }</tt></p></div> </blockquote><p>This declares the beginning of the program with a sub-rule (don't worry, we will get to rules and tokens in a moment).</p><blockquote><div><p> <tt>token linenum { &lt;digit&gt;+ }</tt></p></div> </blockquote><p>A token is a sub-rule that defines the very basic structures of your language. 
Much like a scanner in lex, it defines the bits of your language rather than the structure of your language. This token defines the program line number that is the most recognizable part of every BASIC program.

    token term {
            | $<float>:=[ \d+<dot>\d* | <dot>\d+ ]
            | $<integer>:=[\d+]
            | <func>
            | $<ident>:=[<[a..z]><[_a..z0..9]>*]
    }

This defines the different kinds of numbers (float and integer), variable names (ident), and built-in functions (func) which can be used in expressions. The strange format basically states "bind this regular expression to this variable, which will be used later". In this case, "later" means in the mathematical expressions which are valid in the language. The "|" is the alternation operator and tells the parser that it is a float or an integer or a func or an ident. The <dot> is a convenience token that parses a '.' so that you do not have to put \. in the grammar itself. The [] is a means of grouping, not a character class; the <[ ]> form replaces [] for character classes in Perl6.

    token func {
            <func_name><'('><expr><')'>
    }

This defines what a function should look like. It probably should be a rule rather than a token, because it describes the syntax of the language rather than just the basic bits of the language.

    token func_name {
            sin
            | cos
            | tan
            | atn
            | exp
            | abs
            | log
            | sqr
            | rnd
            | int
    }

This describes the names of the built-in functions that are provided in BASIC.

    rule program { [<statement> <';'>]+ | <error: syntax error> }

A rule is what describes the actual syntax of your language. The <'...'> quotes declare a non-capturing sub-rule: the <';'> means to match exactly that character. In this case, it means that each statement must end with a ;. I know that this is not good BASIC, but to make \n the line end would require that I change <ws> (the whitespace rule which is automatically applied unless redefined by the grammar). I will change it eventually. The [] creates a group, and the '+' modifier means that it must match one or more times.
The <error: syntax error> alternative is essential to the grammar because otherwise, as I found out, Parrot will throw a Null PMC error when trying to parse your language.

    rule statement {
            <linenum>
            [
              <let_statement>
              | <dim_statement>
              | <print_statement>
              | <goto_statement>
              | <gosub_statement>
              | <next_statement>
              | <end_statement>
              | <stop_statement>
              | <return_statement>
              | <for_statement>
              | <if_statement>
              | <func_statement>
            ]
    }

This states that a statement has a linenum token at the beginning, then one of these sub-statements, and ends in a ; character.

    rule let_statement { <'let'> <expr> }
    rule dim_statement { <'dim'> <ident> <'('> <digit>+ <')'> }
    rule print_statement { <'print'> <PGE::Text::bracketed: "> }
    rule goto_statement { <'goto'> <linenum> }
    rule gosub_statement { <'gosub'> <linenum> }
    rule next_statement { <'next'> <ident>+ }
    rule end_statement { <'end'> }
    rule stop_statement { <'stop'> }
    rule return_statement { <'return'> }
    rule for_statement { <'for'> <expr> <'to'> <expr> [<'step'> <expr>]? }
    rule if_statement { <'if'> <expr> <'then'> <linenum> }
    rule func_statement { <'def'> <'fn'><[a..z]><'('><ident>+<')'> <'='> <expr> }

These are the sub-statements that make up the bulk of BASIC. There are a few interesting points here. First, the <'let'> means to match exactly those characters in a rule. The <expr> is a sub-rule that defines the mathematical expressions in BASIC, which I will discuss soon. The <PGE::Text::bracketed: "> is a special sub-rule in PGE that allows you to match balanced characters; for now, print will have balanced " to make a string. PGE::Text::bracketed can also take (), {}, and []. PGE will keep track of the different levels of those characters and everything else, which massively simplifies writing a grammar.
Notice the for_statement, it has an optional part of the rule which is grouped by the [] and augmented by the ?, which means match zero or one time only.</p><blockquote><div><p> <tt>rule 'expr' is optable {<nobr> <wbr></nobr>... }</tt></p></div> </blockquote><p>This creates the operator precedence table that will be used in BASIC for a mathematical expression, as I have discussed above. Again, this massively simplifies writing grammars. Of course, there is more to it than that as you must define what your operators are.</p><blockquote><div><p> <tt>proto 'term:' is precedence('=') is parsed(&amp;term) {<nobr> <wbr></nobr>... }</tt></p></div> </blockquote><p>I am not sure what the proto stands for in Perl6 grammars but it is used here to define operators and their precedence. The 'term:' operator is the top level of the operator table. The precedence adverb tells PGE how each operator should be applied. The adverb 'is parsed' tells PGE to use the token called term that we defined at the beginning of the file. This is why many of the built-in functions of BASIC are defined in the token term section because they can be applied in an expression and take themselves expressions as their argument. The braces are, in grammars generally, used to indicate actions when the parser reaches that spot. I am not sure what the {<nobr> <wbr></nobr>... } is for but each of the operator rules has them in all examples that I have seen so I added them here.</p><blockquote><div><p> <tt>#parens<br>proto 'circumfix:( )'&nbsp; is equiv('term:')<br>&nbsp; &nbsp; is pirop('set')<br>&nbsp; &nbsp; {<nobr> <wbr></nobr>... }</tt></p></div> </blockquote><p>As withe the PGE::Text::bracketed above, the expressions can have circumfix operators which encircle an expression. The adverb 'is equiv' tells PGE that the parens bind as tightly as the highest precedence level, in this case term. If the operator has a PIR equivalent, you can use the adverb 'is pirop' to indicate this.</p><blockquote><div><p> <tt># negation<br>proto 'prefix:-'<br>&nbsp; &nbsp; is looser('term:')<br>&nbsp; &nbsp; is pirop('neg')<br>&nbsp; &nbsp; {<nobr> <wbr></nobr>... }</tt></p></div> </blockquote><p>For the operators to work correctly, you must order them in some way. I took from the abc language and keeping an eye on the C operator precedence table simplified the abc language one. Here you will see the adverb 'is looser' and this indicates the precedence hierarchy. I will append the other operators here for reference. An astute reader will notice that in the rules above the rule does not have an = in its rule but the does. I am not sure what the problem is here (and it is probably my problem not PGE's) as the parses just fine but the does not and seems to need that extra = sign to parse. Another thing that an astute reader will notice is that there are several different types of operators: circumfix, prefix, infix, and postfix. This corresponds to its position in an expression.</p><blockquote><div><p> <tt>## exponentiation<br>proto 'infix:^'<br>&nbsp; &nbsp; is looser('prefix:-')<br>&nbsp; &nbsp; {<nobr> <wbr></nobr>... }<br> <br>#multiply<br>proto 'infix:*'<br>&nbsp; &nbsp; is looser('prefix:-')<br>&nbsp; &nbsp; is pirop('mul')<br>&nbsp; &nbsp; {<nobr> <wbr></nobr>... }<br> <br>#divide<br>proto 'infix:/'<br>&nbsp; &nbsp; is equiv('infix:*')<br>&nbsp; &nbsp; is pirop('div')<br>&nbsp; &nbsp; {<nobr> <wbr></nobr>... 
}<br> <br>#add<br>proto 'infix:+'<br>&nbsp; &nbsp; &nbsp;is looser('infix:*')<br>&nbsp; &nbsp; &nbsp;is pirop('add')<br>&nbsp; &nbsp; &nbsp;{<nobr> <wbr></nobr>... }<br> <br>#subtract<br>proto 'infix:-'<br>&nbsp; &nbsp; is equiv('infix:+')<br>&nbsp; &nbsp; is pirop('sub')<br>&nbsp; &nbsp; {<nobr> <wbr></nobr>... }<br> <br>#assign<br>proto 'infix:='<br>&nbsp; &nbsp; is looser('infix:+')<br>&nbsp; &nbsp; is assoc('right')<br>&nbsp; &nbsp; is past('assign')<br>&nbsp; &nbsp; is lvalue(1)<br>&nbsp; &nbsp; {<nobr> <wbr></nobr>... }</tt></p></div> </blockquote><p>The last part of the grammar are the relational operators. In the case of BASIC, &gt;,=, and </p><blockquote><div><p> <tt>&lt;&gt;</tt></p></div> </blockquote><p> (meaning not equal). I borrowed this code from punie's grammar almost directly. I am not sure what the adverb 'is assoc' means but I left it in there just in case it broke without it.</p><blockquote><div><p> <tt> #relational operators<br>proto 'infix:&lt;'&nbsp; is looser('infix:=') is assoc('non') {...}<br>proto 'infix:&gt;'&nbsp; is equiv('infix:&lt;')&nbsp; &nbsp;is assoc('non') {...}<br>proto 'infix:&lt;=' is equiv('infix:&lt;')&nbsp; &nbsp;is assoc('non') {...}<br>proto 'infix:&gt;=' is equiv('infix:&lt;')&nbsp; &nbsp;is assoc('non') {...}<br>proto 'infix:&lt;&gt;' is equiv('infix:&lt;')&nbsp; &nbsp;is assoc('non') {...}</tt></p></div> </blockquote><p>The last but not least is compiling you grammar. As you noticed above, there is some superstructure that needs to be in place. What I generally do is compile the grammar and out put to a basic_enc.pir file which I then include at the bottom of my superstructure I then do 'parrot/parrot -o basic.pbc basic.pir' then to run my test file against I do 'parrot/parrot basic.pbc test.bas' which will either dump a valid match object to my screen where I can check to see if it parsed the way I expected or throw a Null PMC error, which means that there is something wrong with my grammar otherwise it will tell me that it failed to parse the source. Currently, my test file looks like this:</p><blockquote><div><p> <tt>##TODO:<br>##Make dim work correctly (ie. 10 dim b(10,10))<br>##Redefine &lt;ws&gt; to deal with \n as line end and REM as comment<br>##Matrix operations<br>##Allow scientific notation numbers<br> <br>10 let foo = 5 + 5;<br>20 dim a(10);<br>30 print "blah";<br>40 goto 10;<br>50 gosub 20;<br>60 next bar;<br>70 end;<br>80 stop;<br>90 return;<br>100 for x = 1 to 5;<br>110 for y = 1 to 10 step 2;<br>120 if z &gt; 5 then 10;<br>130 def fna(blah) = 5 + z;<br>140 let foo = sin(5);<br>150 let foo = -6;<br>160 let foo = (5 + 5);<br>170 let foo = 5^5;<br>180 let foo = 1.2;<br>190 let foo =<nobr> <wbr></nobr>.2;</tt></p></div> </blockquote><p>At this point, my next step would be to look into TGE which is the language to create abstract syntax trees in the parrot compiler tool chain. I am still in the preliminary stages of that particular task and I hope to write a similar post about my experiences with TGE as with PGE. At this point, I would like to thank Patrick R. Michaud and Allison Randal, who has worked very hard on TGE and PGE. They made writing language grammars fun and interesting. With these tools, I could write up and test bits of a grammar at a time and run them against my cooked up code. I would also like to thank everyone who has worked on the ParrotVM as it makes playing with a VM fun. I have a full VM that I can play with to my hearts content. Again, Thank You! 
I am looking forward to my first really working version of the language.

I would like to reiterate that I would appreciate any recommendations, suggestions, and criticism. I have a few questions as well. For instance, is the <func> rule in the token term the correct way of writing in the built-in functions? Anyway, I will leave this for now, and I hope this helps anyone else learning PGE.

cyocum 2007-02-23T20:09:42+00:00 journal