
All the Perl that's Practical to Extract and Report

cyocum (7706)

An American post-graduate student living in Scotland.

Journal of cyocum (7706)

Tuesday October 30, 2007
02:39 PM

My Review of Ubuntu 7.10 (Gutsy Gibbon)

I have now used Ubuntu as my primary desktop for seven days, so I thought I would write down my initial impressions. First, I have to say that I can do everything on Ubuntu that I could do in Windows, as long as it is not hardcore gaming. The Add/Remove software applet has been pretty much all that I have needed. I will say that installing LaTeX proved a bit daunting, as I had to use the Synaptic package manager rather than Add/Remove (AUCTeX was similar). Otherwise, it was painless to install new software. It would also be nice to have a package for the MiKTeX Package Manager, which is fantastic even on Linux for installing LaTeX styles and other LaTeX material.

Now, I will have to say that I bought a Dell with an X1300 Pro from ATI, which was probably a mistake on my part, but now that ATI is releasing Linux drivers, I think it might turn around. In any case, once I had installed the OS on the hard drive, Ubuntu said that there were restricted drivers I could use for my hardware, so I decided to try them out. After clicking on a few things, which seemed to install them, I rebooted the machine. Then I got what is called the "black screen of death" on reboot. It turns out there is a file on my machine that blacklists restricted drivers, and I had to modify this file to allow my machine to boot into X. While I was able to do that from the command line (my FreeBSD experience was definitely a help here), I am not sure what a new user would make of this. Anyway, I decided to spend most of the day installing the new ATI drivers from AMD. My efforts bore great fruit (I now have 3D effects on my desktop and other goodies), but the experience left me feeling that if I were a "normal" user and got the "black screen of death", I would be less inclined to use Ubuntu in the future.

Otherwise, I have to admit that I no longer have to jump through hoops, as I did on Windows, to get my favorite software to work. As I said, I can do anything that a Windows user can do, and I get the benefit of a system that nearly always works. The only thing I miss right now is the gaming aspect of my computer. I installed the AMD64 version, and I am not sure if any of my old video games will work under Wine. Hopefully, in the near-ish future, games will begin to appear for my machine and I can start using my computer for entertainment again.

I would definitely recommend Ubuntu for the savvy computer user who is not afraid to dig a bit to get the gold. There are a couple of people for whom I am going to install Ubuntu in the near future, and I hope that this distro will keep up the good work.

Wednesday October 24, 2007
05:11 PM

Kicking the Microsoft Habit

Well, I finally did it. After years of promising myself that I would kick my Microsoft habit, I did it. It would be honest to say that it was in a fit of pique but I did it all the same. I went for Ubuntu 7.10 (amd64) and I am still getting comfortable in my new computing home. Hopefully, I will not have the "OMG What Have I Done" feeling tomorrow morning.

Tuesday August 07, 2007
07:25 AM

Tinkering with BibTeX, Perl, and YAML

After working with BibTeX for my thesis, I started looking into the style language that comes with it because, being in the Humanities, we have some fairly complex needs when it comes to referencing. This is especially true in the history-based disciplines (i.e. Chicago in the States and MHRA in the UK). At this point, the jurabib package seems to fit the bill, as it does generally everything, but the code that underpins it is difficult to understand (and my LaTeX-foo is still very weak). I also took a look at the style language that comes with BibTeX, and I did not like what I saw. Reverse Polish Notation works great for stack-based languages, but it does not work well as a way for humans to describe something to a computer. Not only that, but BibTeX has a fixed memory structure (bibtex8 fixes some of this, but it still has an upper limit).
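For a taste of what that style language looks like, here is one small function from the standard styles (plain.bst and friends); everything works by pushing and popping a stack in postfix order:

```
FUNCTION {field.or.null}
{ duplicate$ empty$
    { pop$ "" }
    'skip$
  if$
}
```

Reading it requires simulating the stack in your head: duplicate$ copies the top of the stack, empty$ pops it and pushes a truth value, and if$ then pops the predicate and picks one of the two quoted branches.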

What I wanted was something specific and easy to understand. Less code is generally a good way to have fewer bugs (although not always). At the same time, I had a bad reaction to the style language; I did not like it at all. So I figured that I could put something better together with Perl.

My first stop was CPAN, where I found Text::BibTeX. While this is fantastic work, it relies on a C library and is thus not completely portable. From the documentation of that module, I began to understand some of the edge cases that come with the BibTeX file format. I began to wonder if another format would be beneficial, but I hacked together a version in Parse::RecDescent which kind-of works (it does not handle @string's and @comment's just yet). After working on a Parse::RecDescent grammar, I thought that a file format I did not have to support a parser for would be better than fighting with BibTeX's file format. This is when I thought of using YAML as the basis for my BibTeX replacement, as it is easily readable by people and by computers.
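Since the Parse::RecDescent grammar isn't shown here, here is a deliberately naive, regex-based sketch of the same idea. It handles only the simplest well-formed entries (single-level braces, no @string or @comment), which is exactly where the real edge cases live:

```perl
use strict;
use warnings;

# Naive parser for a simple BibTeX entry: one brace-delimited value
# per field, no nested braces, no @string or @comment support.
sub parse_entry {
    my ($text) = @_;
    my ( $type, $key, $body ) =
        $text =~ /\@(\w+)\s*\{\s*([^,]+)\s*,(.*)\}\s*\z/s
        or return;
    my %fields = ( _type => lc $type, _key => $key );
    while ( $body =~ /(\w+)\s*=\s*\{([^{}]*)\}/g ) {
        $fields{ lc $1 } = $2;
    }
    return \%fields;
}

my $entry = parse_entry(<<'BIB');
@article{Ahyan2004,
  author = {Ahyan, Stepan},
  title  = {The Hero, the Woman, and the Impregnable Stronghold: A Model},
  year   = {2004},
}
BIB

print "$entry->{_key}: $entry->{title}\n";
```

A real implementation needs balanced-brace values, quoted strings, and concatenation with #, which is why Text::BibTeX leans on a C library and why a grammar-based parser is attractive.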

With these things in mind, I wrote a bit of code to read a YAML-based format for bibliographic information. I ended up with something that looks like this (I have not fully figured it out just yet, so it might change based on experience):

    JIES: The Journal of Indo-European Studies

Ahyan2004:
  type: article
  author: Ahyan, Stepan
  title: "The Hero, the Woman, and the Impregnable Stronghold: A Model"
  year: 2004
  volume: 32
  pages: 79-100

This is read by the YAML module and handed back as a hash reference, which works very well for the purpose, and I don't have to maintain the file format (yay!). Once this is parsed, then what?
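For illustration, the hash reference handed back for the entry above would be shaped roughly like this (a hand-written stand-in for the YAML module's output; the exact field names are still provisional):

```perl
use strict;
use warnings;

# The structure YAML loading would hand back: a hash of citation
# keys, each mapping to a hash of bibliographic fields.
my $bib = {
    Ahyan2004 => {
        type   => 'article',
        author => 'Ahyan, Stepan',
        title  => 'The Hero, the Woman, and the Impregnable Stronghold: A Model',
        year   => 2004,
        volume => 32,
        pages  => '79-100',
    },
};

# Looking up an entry by citation key is then a plain hash access.
my $entry = $bib->{Ahyan2004};
print "$entry->{author} ($entry->{year})\n";
```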

Well, the way BibTeX works is that it reads LaTeX's .aux file looking for certain tags (\citation{Ahyan2004}, for example) and for tags that tell it where the bibliographic database is and where the style file is. I wanted to replace the style file with a Perl solution. So I wrote a class called BibStyle::Style, which is the base class for all styles in this scheme. It will also provide some of the basic machinery for making styles (like adding italics and wrapping the entire entry in the appropriate \bibitem, which goes into the .bbl file to be read by LaTeX on the second run). The only problem is that I don't know at run time whether someone has written a proper style class, so I have to use eval "require $class;";, which I am not very happy about (if anyone has an idea of how to write a plug-in style without that, please feel free to enlighten me). I then call the method style_entry with the key and a hash ref to the contents. The class does whatever it needs to do to style the entry and passes back a styled entry for the \bibitem in the .bbl file. Here is where there are a few problems.
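Here is a minimal sketch of that plug-in loading; the BibStyle::Plain class and its output format are invented for the example, and the guard simply avoids re-requiring a class that is already loaded:

```perl
use strict;
use warnings;

# Load a style class by name at run time, then hand the class name
# back so the caller can dispatch style_entry on it.
sub load_style {
    my ($class) = @_;
    unless ( $class->can('style_entry') ) {
        eval "require $class; 1"
            or die "cannot load style class $class: $@";
    }
    return $class;
}

# A stand-in style class, defined inline for the sketch.
package BibStyle::Plain;

sub style_entry {
    my ( $class, $key, $entry ) = @_;
    return "\\bibitem{$key} $entry->{author}, \\emph{$entry->{title}}.";
}

package main;

my $style = load_style('BibStyle::Plain');
print $style->style_entry( 'Ahyan2004',
    { author => 'Ahyan, Stepan',
      title  => 'The Hero, the Woman, and the Impregnable Stronghold' } ), "\n";
```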

The \bibitem command takes one mandatory and one optional argument; the one in [] is the label that is inserted when you call \cite{key}. This label is exactly the same every time you call it, so you have to redefine \cite so that, if it sees the same key as last time, it inputs "ibid" instead of repeating the same label, which is possible but looks bad and is prohibited in MHRA style. I am investigating some LaTeX-foo tricks to allow ibid style.
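One possible LaTeX-side trick, sketched here untested and certainly not MHRA-grade: remember the last cited key and emit "ibid." when it repeats immediately:

```latex
\let\oldcite\cite
\def\lastcitekey{}
\renewcommand{\cite}[1]{%
  \def\currentcitekey{#1}%
  \ifx\currentcitekey\lastcitekey
    ibid.%
  \else
    \oldcite{#1}%
  \fi
  \global\let\lastcitekey\currentcitekey}
```

The \ifx compares the stored key with the current one, so only consecutive repeats collapse to ibid.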

I have not yet implemented the sort function on the bib data, but it should not be terribly difficult, although I have to watch out for the backslash commands that put accents and such on letters. I think I can take most of the common ones, map them to their Unicode equivalents, and sort on that.
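A sketch of that sort, mapping a few common accent commands down to bare letters for the sort key (a simplification of the Unicode mapping just described; the table is illustrative, not complete):

```perl
use strict;
use warnings;

# Build a sort key by stripping LaTeX accent commands so that,
# e.g., "\'{e}" sorts alongside "e". Deliberately incomplete.
sub sort_key {
    my ($name) = @_;
    my $key = lc $name;
    $key =~ s/\\['`^"~]\{(\w)\}/$1/g;    # \'{e}, \"{o}, \~{n} ...
    $key =~ s/\\['`^"~](\w)/$1/g;        # the \'e form
    $key =~ s/[{}\\]//g;                 # drop leftover markup
    return $key;
}

my @authors = ( "P\\'{e}rez, Juan", "Perell, Ann", "Zeta, Bob" );
my @sorted  = sort { sort_key($a) cmp sort_key($b) } @authors;
print join( "; ", @sorted ), "\n";
```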

Right now the code is not in any state to release. Honestly, it is in the tinkering stage but I wanted to show that it is indeed possible to replace BibTeX with something else. In fact, it seems pretty easy, at this point, to replace it with styles that are commonly used in the sciences and the parenthetical styles used in Harvard and MLA. The Chicago, Oxford, and MHRA styles will need more LaTeX intervention to work correctly. My Parse::RecDescent version might hang around as a translator if (and a big if at that) I release the code.

Tuesday July 24, 2007
03:58 AM

MMORPGS, Programming, and Magic

A thought came to me the other day. I am not an expert on MMORPGs or their development or design, but I thought it would be very cool if, instead of level grinding for more magic or getting little icons to cast spells, you made magic a scripting language within the environment.

When a player starts the game, you give them no information about how to do magic. They must be interested enough about the hints of magic to quest for it. Even when they do quest for it, you only give them a bit of information at a time in highly obscure language.

As I said, I am not a designer of these kinds of things but it would be much more interesting than level grinding. I am not sure how you would fit it into a consistent style but it is an intriguing idea.

Tuesday June 12, 2007
05:02 PM

Truly Random Numbers

My friend and I were discussing different role-playing game systems. We started discussing the 6000d6 of the Dragonball Z role-playing game. He wanted to see the rolls plotted out on a graph. So I pulled up my favorite statistical programming language, R, and set to work. After looking at the different random generators, I came across a package for random numbers which draws its numbers from a service that uses atmospheric noise to generate truly random numbers. Well, I plugged it in and generated 6000d6 very quickly. I then pulled up a histogram of the results, expecting to see the normal distribution (or bell curve). Little did I know what truly random die rolls look like, and I had to dig around in Wikipedia to find out.

Honestly, I think I made a mistake and that they do fall into the normal distribution, but I am not well enough versed in R to force out that information. Still, I thought it extremely cool that I now have a truly random die roller!
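Out of curiosity, here is a quick Perl sketch of the same experiment with ordinary pseudo-random numbers rather than atmospheric ones: tallying 6000 individual d6 rolls gives a roughly flat histogram of about 1000 per face, since it is sums of several dice, not single rolls, that pile up into a bell curve.

```perl
use strict;
use warnings;

# Roll 6000 six-sided dice and tally the faces. Individual rolls
# are uniformly distributed; sums of many dice would be bell-shaped.
my %count;
$count{ 1 + int rand 6 }++ for 1 .. 6000;

for my $face ( 1 .. 6 ) {
    printf "%d: %d\n", $face, $count{$face} // 0;
}
```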

Wednesday May 09, 2007
04:47 AM

LaTeX, Editorial Comments, and the Humanities

As one of the few people getting a degree in the Humanities (Celtic Studies) and using LaTeX to write my dissertation, it can be a bit disconcerting when people rant and rave about the problems of Word but seem reluctant to have anything to do with an alternate solution.

I often ask people to read and comment on my work. This has become a bit of a problem because my friends hate looking at LaTeX code. One of them has gone so far as to blame LaTeX for some of my own writing problems. I have resorted to printing off copies (the Computer Science department has free printing so I use a friend there to print off copies for comments). This is not the best solution by any means and is a waste of paper.

I was poking around Adobe Acrobat and I found out that there is a comment feature in it. After wandering around the web for a while, I discovered that you have to pay Adobe a mint to get that functionality. It seems that PDF was never meant to be edited directly.

I am not sure at this point what to do if I want electronic comments on my dissertation. If I compile to PDF, I can only send hard copy. If I send a straight LaTeX file, I get a continuous whine about it not "looking right" and questions about all the commands. People, it seems to me, have become so sensitive to the look of something that they have completely forgotten about the content. I have thought of using "detex" to strip the LaTeX commands out, but I suspect that if I do that, I will get "where are all the footnotes?" questions.

I am still looking for a good solution to the editorial comments in LaTeX for the humanities problem. I might contact some of the people in the LaTeX community to see if there is a solution out there that I have not been able to get google to spit out.

Monday March 05, 2007
12:35 PM

Thoughts about Working with TGE and Transformation Languages

As I am beginning to work with TGE, I am thinking about how best to represent this kind of transformation. PIR is great for now but the language tag keeps leaping out to my eye and I keep wondering about the future of tree transformations.

There are a couple of examples of transforming trees already available from the markup language world. In SGML, you can do this with DSSSL, which is a Scheme-like language; I have not worked with it. In XML, you have two: XSL-FO and XSLT. From what I have gathered, XSL-FO is for formatting into other output formats, while XSLT is for transforming into another XML representation. These two languages use XML as their base language rather than some other representation, as DSSSL does.

It will be interesting to see what the wizards have up their sleeves for the transformation end of the compiler tool chain. I would figure that it will go down the same path as PGE, with a Perl 6-like syntax. Scheme and Lisp have much going for them in terseness, but I think it would be easier for programmers if the representation of the transformation were in a language similar to that of their grammar. Thus, although I sometimes wonder about using XML as a transformation language, it seems to work out quite well for its purpose.

Friday February 23, 2007
03:09 PM

PGE and Basic 1964: an Annotated Journey

I have watched the development of Perl 6 and Parrot for some time. When PGE was released, making development of languages for Parrot easier, I was very excited. I have worked previously on an internal web testing language for a well-known e-tailer; that was based on ANTLR and written in Java. So I recently decided to bite the bullet, especially after chromatic's blog post about his experience with Parrot's compiler toolchain.

First, however, I needed a project that was fairly simple but had some hard tricks later on. I decided that I would reach back in time to Dartmouth BASIC from 1964. There is a manual [warning: PDF] available that does a good job of describing the language.

After reading this and the presentation on PGE, I decided to figure out how to use PGE. In this endeavor, I downloaded a recent tar-ball and compiled it on my FreeBSD machine that I used for this kind of stuff.

The best place to look for code to steal is in the parrot/languages directory. Among the best places to get understanding is in the abc (a bc clone) language and punie (perl 1 in PGE). You will find their grammars in their /src directories.

As for PGE itself, from what I understand, it is a subset of the Perl 6 grammar feature that compiles down to PIR for use with Parrot. The mechanism that does this is parrot/compilers/pge/pgc.pir, which will take your grammar and output the PIR to standard out. This PIR is not, at this point (a tarball just before Parrot 0.4.9), a ready-to-run grammar; you need to provide a bit of superstructure. What I did was crib code from abc.pir in /parrot/languages/abc/src, which looks like this:

.namespace [ 'Basic64' ]

.sub '__onload' :load :init
    load_bytecode 'PGE.pbc'
    load_bytecode 'PGE/Text.pbc'
    load_bytecode 'PGE/Util.pbc'
    load_bytecode 'PGE/Dumper.pbc'
    load_bytecode 'Parrot/HLLCompiler.pbc'

    # import PGE::Util::die into Basic64::Grammar
    $P0 = get_hll_global ['PGE::Util'], 'die'
    set_hll_global ['Basic64::Grammar'], 'die', $P0

    # register the compiler under the 'Basic64' name
    $P0 = new [ 'HLLCompiler' ]
    $P0.'language'('Basic64')
    $P0.'parsegrammar'('Basic64::Grammar')
.end

.sub 'main' :main
    .param pmc args
    $P0 = compreg 'Basic64'
    $P1 = $P0.'command_line'(args, 'target' => 'parse')
.end

.namespace [ 'Basic64::Grammar' ]
.include 'basic_gen.pir'

The last line is the most important as this is where you include the output from pgc.pir.

Once you have that in place, you can start working on your grammar. The rest of this will be me discussing my grammar. Any feedback would be very much appreciated. I have probably made many mistakes (both in code and conceptual) and there are a few things that still need to be done but it seems to parse my, admittedly nonsensical, basic program made up of different BASIC statements.

grammar Basic64::Grammar;

As with the new structure for parsing in Perl6, this declares a new grammar in its own namespace. In a fully functioning Perl6, this would allow it to reside in its own file and it would be an error to declare any other user defined type in that file.

token TOP { ^ <program> $ }

The token TOP is the entry point of the grammar: it anchors the match at the beginning and end of the input and hands off to the <program> sub-rule (don't worry, we will get to rules and tokens in a moment).

token linenum { <digit>+ }

A token is a sub-rule that defines the very basic structures of your language. Much like a scanner in lex, it defines the bits of your language rather than the structure of your language. This token defines the program line number that is the most recognizable part of every BASIC program.

token term {
        | $<float>:=[ \d+<dot>\d* | <dot>\d+ ]
        | $<integer>:=[\d+]
        | <func>
        | $<ident>:=[<[a..z]><[_a..z0..9]>*]
}

This defines the different kinds of numbers (float and integer), variable names (ident), and built-in functions (func) which can be used in expressions. The strange format basically states "bind this regular expression to this variable to be used later"; in this case, "later" means in the mathematical expressions which are valid in the language. The "|" is the alternation operator and tells the parser to match float or integer or func or ident. The <dot> is a convenience token that parses a '.' rather than having to put \. in the grammar itself. The [] is a means of grouping, not a character class; <[ ]> replaces [] as the character-class syntax in Perl 6.
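For readers more comfortable with Perl 5, the term token corresponds roughly to this regex, with named captures standing in for the $<float>-style bindings (the <func> branch is left out of the sketch):

```perl
use strict;
use warnings;

# A Perl 5 approximation of the PGE term token: try float first,
# then integer, then identifier (order matters, as in the grammar).
my $term_re = qr/
    (?<float>   \d+ \. \d* | \. \d+ )
  | (?<integer> \d+ )
  | (?<ident>   [a-z] [_a-z0-9]* )
/x;

for my $input ( '1.2', '.2', '42', 'foo_1' ) {
    if ( $input =~ /\A$term_re\z/ ) {
        my ($kind) = grep { defined $+{$_} } qw(float integer ident);
        print "$input => $kind\n";
    }
}
```

Putting the float branch before the integer branch is what keeps "1.2" from matching as the integer 1 followed by junk, exactly the reason the grammar lists its alternatives in that order.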

token func {
        <func_name> <'('> <expr> <')'>
}

This defines what a function call should look like. It probably should be a rule rather than a token, because it describes the syntax of the language rather than just the basic bits of the language.

token func_name {
        | sin
        | cos
        | tan
        | atn
        | exp
        | abs
        | log
        | sqr
        | rnd
        | int
}
This describes the names of the built-in functions that are provided in BASIC.

rule program { [<statement> <';'>]+ | <error: syntax error> }

A rule is what describes the actual syntax of your language. The <'...'> form declares a non-capturing sub-rule that matches exactly those characters; in this case, it means that each statement must end with a ;. I know this is not good BASIC, but making \n the line end would require that I change <ws> (the whitespace rule, which is automatically applied unless redefined by the grammar); I will change it eventually. The [] creates a group, and the '+' modifier means that it must match one or more times. The <error: ...> sub-rule is essential to the grammar because otherwise, as I found out, Parrot will throw a Null PMC error when it fails to parse your language.

rule statement {
        <linenum>
        [
                | <let_statement>
                | <dim_statement>
                | <print_statement>
                | <goto_statement>
                | <gosub_statement>
                | <next_statement>
                | <end_statement>
                | <stop_statement>
                | <return_statement>
                | <for_statement>
                | <if_statement>
                | <func_statement>
        ]
}
This states that a statement rule has a linenum token at the beginning, followed by one of these sub-statements; the terminating ; character is matched by the program rule above.

rule let_statement { <'let'> <expr> }
rule dim_statement { <'dim'> <ident> <'('> <digit>+ <')'> }
rule print_statement { <'print'> <PGE::Text::bracketed: "> }
rule goto_statement { <'goto'> <linenum> }
rule gosub_statement { <'gosub'> <linenum> }
rule next_statement { <'next'> <ident>+ }
rule end_statement { <'end'> }
rule stop_statement { <'stop'> }
rule return_statement { <'return'> }
rule for_statement { <'for'> <expr> <'to'> <expr> [<'step'> <expr>]? }
rule if_statement { <'if'> <expr> <'then'> <linenum>}
rule func_statement { <'def'> <'fn'><[a..z]><'('><ident>+<')'> <'='> <expr>}

These are the sub-statements that make up the bulk of BASIC. There are a few interesting points here. First, the <'...'> form means to match exactly those characters in a rule. The <expr> is a sub-rule that defines the mathematical expressions in BASIC, which I will discuss soon. The <PGE::Text::bracketed: "> is a special phrase in PGE that allows you to create balanced delimiters; for now, print will have balanced " to make a string. PGE::Text::bracketed can also take (), {}, and []. PGE will keep track of the different nesting levels and everything else, which massively simplifies writing a grammar. Notice the for_statement: it has an optional part of the rule, which is grouped by the [] and augmented by the ?, which means match zero or one time only.

rule 'expr' is optable { ... }

This creates the operator precedence table that will be used in BASIC for a mathematical expression, as I have discussed above. Again, this massively simplifies writing grammars. Of course, there is more to it than that as you must define what your operators are.

proto 'term:' is precedence('=') is parsed(&term) { ... }

I am not sure what the proto stands for in Perl 6 grammars, but it is used here to define operators and their precedence. The 'term:' operator is the top level of the operator table. The precedence adverb tells PGE how tightly each operator binds. The adverb 'is parsed' tells PGE to use the token called term that we defined at the beginning of the file. This is why many of the built-in functions of BASIC are defined in the token term section: they can be applied in an expression and themselves take expressions as their arguments. The braces are, in grammars generally, used to indicate actions when the parser reaches that spot. I am not sure what the { ... } is for, but each of the operator rules has it in all the examples I have seen, so I added it here.

proto 'circumfix:( )'  is equiv('term:')
    is pirop('set')
    { ... }

As with the PGE::Text::bracketed above, expressions can have circumfix operators, which encircle an expression. The adverb 'is equiv' tells PGE that the parens bind as tightly as the highest precedence level, in this case term. If the operator has a PIR equivalent, you can use the adverb 'is pirop' to indicate this.

# negation
proto 'prefix:-'
    is looser('term:')
    is pirop('neg')
    { ... }

For the operators to work correctly, you must order them in some way. I started from the abc language's table and, keeping an eye on the C operator precedence table, simplified it. Here you will see the adverb 'is looser', which indicates the precedence hierarchy. I will append the other operators here for reference. An astute reader will notice that the for_statement rule above does not have an = in it but the func_statement does. I am not sure what the problem is here (and it is probably my problem, not PGE's), as the for_statement parses just fine but the func_statement does not, and seems to need that extra = sign to parse. Another thing an astute reader will notice is that there are several different types of operators: circumfix, prefix, infix, and postfix. The name corresponds to the operator's position in an expression.

## exponentiation
proto 'infix:^'
    is looser('prefix:-')
    { ... }

proto 'infix:*'
    is looser('prefix:-')
    is pirop('mul')
    { ... }

proto 'infix:/'
    is equiv('infix:*')
    is pirop('div')
    { ... }

proto 'infix:+'
     is looser('infix:*')
     is pirop('add')
     { ... }

proto 'infix:-'
    is equiv('infix:+')
    is pirop('sub')
    { ... }

proto 'infix:='
    is looser('infix:+')
    is assoc('right')
    is past('assign')
    is lvalue(1)
    { ... }

The last part of the grammar is the relational operators: in the case of BASIC, <, >, =, <=, >=, and <> (meaning not equal). I borrowed this code from punie's grammar almost directly. I am not sure what the adverb 'is assoc' means, but I left it in there just in case things broke without it.

#relational operators
proto 'infix:<'  is looser('infix:=') is assoc('non') {...}
proto 'infix:>'  is equiv('infix:<')   is assoc('non') {...}
proto 'infix:<=' is equiv('infix:<')   is assoc('non') {...}
proto 'infix:>=' is equiv('infix:<')   is assoc('non') {...}
proto 'infix:<>' is equiv('infix:<')   is assoc('non') {...}
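The optable is, in effect, a declarative version of a precedence-climbing parser. As a sketch of what PGE is doing under the hood, here is a hand-written evaluator in Perl for the arithmetic subset, with ^ binding tightest and associating to the right (conventional precedence; the real optable's ordering differs in detail):

```perl
use strict;
use warnings;

# Precedence-climbing evaluator for the arithmetic subset of the
# expression grammar: numbers, + - * / and right-associative ^.
my %prec  = ( '+' => 1, '-' => 1, '*' => 2, '/' => 2, '^' => 3 );
my %right = ( '^' => 1 );

sub tokenize {
    my ($src) = @_;
    return [ $src =~ /(\d+(?:\.\d+)?|[-+*\/^()])/g ];
}

sub parse_expr {
    my ( $toks, $min ) = @_;
    my $lhs = parse_atom($toks);
    while ( @$toks and exists $prec{ $toks->[0] }
            and $prec{ $toks->[0] } >= $min ) {
        my $op = shift @$toks;
        # a right-associative op re-enters at its own level,
        # a left-associative one at the next tighter level
        my $next = $right{$op} ? $prec{$op} : $prec{$op} + 1;
        my $rhs  = parse_expr( $toks, $next );
        $lhs = $op eq '+' ? $lhs + $rhs
             : $op eq '-' ? $lhs - $rhs
             : $op eq '*' ? $lhs * $rhs
             : $op eq '/' ? $lhs / $rhs
             :              $lhs ** $rhs;
    }
    return $lhs;
}

sub parse_atom {
    my ($toks) = @_;
    my $t = shift @$toks;
    if ( $t eq '(' ) {
        my $v = parse_expr( $toks, 1 );
        shift @$toks;    # consume ')'
        return $v;
    }
    return $t eq '-' ? -parse_atom($toks) : $t;
}

sub evaluate { parse_expr( tokenize( $_[0] ), 1 ) }

print evaluate('5 + 5 * 2'), "\n";    # 15
print evaluate('2 ^ 3 ^ 2'), "\n";    # 512, right-associative
```

The 'is looser'/'is equiv' adverbs encode exactly the %prec ordering, and 'is assoc' encodes the %right table; the optable just builds this machinery for you from the declarations.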

Last but not least is compiling your grammar. As you noticed above, there is some superstructure that needs to be in place. What I generally do is compile the grammar and output it to a basic_gen.pir file, which I then include at the bottom of my superstructure. I then run 'parrot/parrot -o basic.pbc basic.pir' and, to run my test file against it, 'parrot/parrot basic.pbc test.bas'. This will either dump a valid match object to my screen, where I can check whether it parsed the way I expected; throw a Null PMC error, which means that there is something wrong with my grammar; or tell me that it failed to parse the source. A few things still to do:

##Make dim work correctly (ie. 10 dim b(10,10))
##Redefine <ws> to deal with \n as line end and REM as comment
##Matrix operations
##Allow scientific notation numbers

Currently, my test file looks like this:

10 let foo = 5 + 5;
20 dim a(10);
30 print "blah";
40 goto 10;
50 gosub 20;
60 next bar;
70 end;
80 stop;
90 return;
100 for x = 1 to 5;
110 for y = 1 to 10 step 2;
120 if z > 5 then 10;
130 def fna(blah) = 5 + z;
140 let foo = sin(5);
150 let foo = -6;
160 let foo = (5 + 5);
170 let foo = 5^5;
180 let foo = 1.2;
190 let foo = .2;

At this point, my next step is to look into TGE, which is the language for creating abstract syntax trees in the Parrot compiler tool chain. I am still in the preliminary stages of that particular task, and I hope to write a similar post about my experiences with TGE as with PGE. I would like to thank Patrick R. Michaud and Allison Randal, who have worked very hard on PGE and TGE. They made writing language grammars fun and interesting: with these tools, I could write up and test bits of a grammar at a time and run them against my cooked-up code. I would also like to thank everyone who has worked on the Parrot VM, as it makes playing with a VM fun; I have a full VM that I can play with to my heart's content. Again, thank you! I am looking forward to my first really working version of the language.

I would like to reiterate that I would appreciate any recommendations, suggestions, and criticism. I have a few questions as well. For instance, is the <func> sub-rule in the token term the correct way of handling built-in functions? Anyway, I will leave this for now, and I hope this helps anyone else learning PGE.