I thought I'd share this here, as it's something to trip up the unwary. Posted here because there are many Debian/Knoppix/Ubuntu users in the Perl community.
I've needed to use a web-based chat client to access a vendor's support facility, which needs the Java plugin. Fine: I go to the relevant page, which appears complete with jigsaw icons and the "Install Missing Plugins" button.
Sadly, it's one of those plugins that doesn't install at the click of a button, so I'll list the details of what needs to be done to install it.
First, there's a download page. Choose the second option: "Linux (self-extracting file)". Keep track of where you have saved this, e.g. a non-root user's home directory or desktop.
Now, there are several ways of proceeding, but I present one method which gives an upgrade path. For this, you need the Debian packages fakeroot and java-package installed, together with any dependencies.
Issue the following command:
fakeroot make-jpkg [jre file you downloaded]
This will prompt you with a "proceed" question, and also present you with the full text of the license, which you need to scroll through and agree to. This results in a Debian package, which you then install with:
sudo dpkg -i [package]
Run the command java -version to check that the installation has worked. Now you need to symlink in the plugin so that Firefox can see it. Plugins are kept in a plugins directory under ~/.mozilla, which might not exist; create it if necessary, then place a symlink to the plugin in this directory.
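Concretely, the symlinking might look like this. The plugin path is an example from a Sun JRE 1.5 package and will vary with JRE version and architecture, so adjust it to match what your package actually installed:

```shell
# Create Firefox's per-user plugin directory if it doesn't exist yet
mkdir -p ~/.mozilla/plugins
# Symlink the Java plugin into it; the source path below is an example,
# not canonical - find the libjavaplugin_oji.so your JRE package installed
ln -sf /usr/lib/j2re1.5-sun/plugin/i386/ns7/libjavaplugin_oji.so \
    ~/.mozilla/plugins/libjavaplugin_oji.so
```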
Now, you need to restart Firefox, and you should pick up the plugin.
Well, LCH have decided to extend my assignment until June. I also know that they've been having difficulties hiring a permie for my role, which is not surprising given that most people with the technical skills and background would prefer to work in a development role.
Beyond June, my suggestion of offering part time consultancy is being taken seriously. This would be very cool.
I'm feeling much more motivated at LCH, and I feel I am being listened to and taken seriously.
I've been informed that my present assignment at the London Clearing House will end at the beginning of March. This is cool, because I won't be out of a job as a result. This is the beauty of working for a consultancy: you get paid even when you're not out on site at a client's - when you're "on the bench". I will have been at LCH for four years come February, and my employers are quite pleased with me for having had such a long assignment.
LCH are going to hire a permie as a replacement, and I've been tasked with coming up with a job spec. If you're interested, see my posting to the banking.pm list and use the contact details given there. There's not much Perl involved in this job; the Perl may wither once I have gone.
Depending on what my employers want me to do next, I may be spending more time in the Perl community. Also, in the meantime, I'm not particularly motivated to do much for LCH.
Why you might want to do this
Maybe the perl on the host you are running on is one you have no control over. It could be that the hosting company or your employer doesn't allow you root access. Maybe you want a more up-to-date perl than the one provided.
Another reason is to have a "virgin perl" to use with the CPAN testers' YACSmoke programme. There are advantages to using the minimum installation: you'll pick up module dependencies that many smoke testers would miss. This would be completely separate from your "working perl", which has all the modules you need to do your work and which is left to the sysadmins and the package installers.
In my case, it's the latter reason. I'm using Debian (Sid actually, and so the perl and packaged modules are pretty up to date), and am setting up two "virgin perls", one with 5.8.8 and one with bleadperl. I've created user accounts specially for the purpose.
I've not tried this on Windows. Maybe it's possible to have a vanilla perl installed somewhere other than C:\Program Files, outside the path used for normal operations.
Planning and execution
Decide where you want to put your perl. In my case, this has involved creating new user accounts, and the perl will live under /home/perl-base.
Obtain the tarball for the perl you want and untar it into a build directory. In my case, /home/perl-base/perl-5.8.8 is fine: I untar as user perl-base in that user's home directory. cd into the top-level directory.
We're going to vary from the normal mantra given in the INSTALL document. Run the command ./Configure -de to generate the files config.sh and Policy.sh. If this encounters any problems, BAIL OUT! These would indicate an issue on your O/S platform; most such problems have been ironed out over time, but this could still be something worth reporting to p5p via perlbug, especially if you are messing with bleadperl.
We're going to throw away the config.sh but keep the Policy.sh. Remove config.sh, then edit Policy.sh, globally replacing /usr/local with your destination install base directory - in my case /home/perl-base. Uncomment the shell variable lines that you have changed.
Run ./Configure again. In my case I use -de again, as I'm building a perl with the default configuration - your mileage may vary. Then run make, make test and make install. You don't need to run make install as root.
You may need to edit your shell startup files so that the new perl comes first in your PATH. Run perl -V and voilà.
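To recap, the whole sequence looks something like this - a sketch assuming the 5.8.8 tarball and my /home/perl-base layout:

```shell
# Recap of the whole sequence, as user perl-base (no root needed)
cd /home/perl-base
tar xzf perl-5.8.8.tar.gz
cd perl-5.8.8
./Configure -de                 # first pass: generates config.sh and Policy.sh
rm config.sh                    # discard config.sh, keep Policy.sh
sed -i 's!/usr/local!/home/perl-base!g' Policy.sh
# Now hand-edit Policy.sh to uncomment the variable lines you changed,
# then reconfigure and build:
./Configure -de                 # second pass: picks up the edited Policy.sh
make && make test && make install
```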
I've recently subscribed to http://www.bookmooch.com/, and have been finding a growing community of people with an interest in books.
I've also become involved in the community through the forum mailing list attached to the site. Seeing the high volume on the list prompted me to set up an IRC channel, #bookmooch on irc.freenode.org. I'm running my cronic time bot there, and am thinking about other bots or plugins, such as ISBN lookup.
Yesterday, I had a nice experience getting a problem with my land line sorted out. Many people have slagged off British Telecom for being inefficient or incompetent, but I feel that on this occasion they have given me good service.
The problem was that the line had become very noisy, with crackles making dialling, let alone conversation, impossible. The ADSL was also dropping out intermittently - I could tell from my IRC scrollback that this had started happening at around 16:00.
As I was still having some broadband connectivity, I decided to try logging the fault through http://www.bt.com/faults. Not only was there a sensible input form with a dropdown select with a list of reasons, but after I logged the call, I got the opportunity to turn on two features:
Redirect incoming calls to my mobile for the duration of the fault
Progress of call sent as text messages to the mobile.
I had a text come through explaining that they had run tests and identified the problem. The following day, while at work, I had a further text saying they were looking at it - I let my team leader know that I might need to dash off home to give the BT engineer access to my flat. At 12:00, I had another text letting me know that the problem had been fixed.
So, full marks to BT for their fault service.
The work on cronic has made me think about using Parse::RecDescent autotrees in modules, as I want to turn the guts of cronic into one or more bot plugins.
The issue is that autotrees have their nodes blessed into packages corresponding to the rule names. To my mind, this sucks for two reasons. Firstly, you're polluting a namespace with no control, and secondly the objects should be brought into existence through a constructor, not blessing.
I toyed with rolling my own trees with an autorule, but this didn't give me what I wanted. I posted a wishlist item to RT, but I don't know when that will get looked at, as I know Damian is a busy man.
I looked briefly at the code in Parse::RecDescent to see if it is patchable, then came up with a better idea. This better idea is to post-process the output from the parser, walking the tree, building properly constructed nodes in a sensible namespace.
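That post-processing walk might look something like this. It's a sketch: the My::AST namespace and its new() constructors are hypothetical, standing in for whatever sensibly named node classes you define, and it assumes the autotree nodes are hashrefs blessed into rule-name packages:

```perl
use strict;
use warnings;
use Scalar::Util qw(blessed reftype);

# Walk an autotree structure, rebuilding each node as a properly
# constructed object in a namespace of our choosing (My::AST::* here).
sub rebuild {
    my ($node) = @_;
    return $node unless ref $node;          # leaf scalar: pass through
    if ( blessed($node) && reftype($node) eq 'HASH' ) {
        # Map the rule-name package to our own namespace and use a
        # real constructor rather than a bare bless
        my $class = 'My::AST::' . blessed($node);
        my %args  = map { $_ => rebuild( $node->{$_} ) } keys %$node;
        return $class->new(%args);
    }
    # Recurse into repetition lists
    return [ map { rebuild($_) } @$node ] if reftype($node) eq 'ARRAY';
    return $node;
}
```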
Inspired by a future need for the Cogers Society to do online debating, I decided to have a go at writing an IRC bot. This was my first attempt.
I decided on Bot::BasicBot as a start point - I had heard good things about this module. I picked Time::Piece and Parse::RecDescent as two modules to use that I am already familiar with.
I found when developing this that launching the bot gave two warnings about calls to new() being deprecated in POE::Component::IRC. I happened to be in correspondence with the module's author later, and prodded jerakeen (author of Bot::BasicBot) via RT. The outcome is that Bot::BasicBot has now been brought up to date and everyone's happy.
The idea is to give cronic fairly simple commands, such as:
cronic: at 12:30 say It's lunch time
cronic: after 5:00 say Your five minutes are up
The bot will ignore any text on the channel for which it has not been directly addressed. The bot will also respond to PRIVMSG, allowing personal reminders.
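A minimal sketch of that addressing behaviour with Bot::BasicBot - the regex parsing here is a simplification (cronic itself uses Parse::RecDescent), and the package name and nick are illustrative:

```perl
package CronicSketch;
use strict;
use warnings;
use base 'Bot::BasicBot';

# Bot::BasicBot calls said() for each message, stripping the "cronic:"
# prefix and setting $msg->{address} when the bot is directly addressed.
sub said {
    my ($self, $msg) = @_;
    return unless $msg->{address};   # ignore unaddressed channel chatter
    if ( $msg->{body} =~ /^at (\d\d?:\d\d) say (.+)$/ ) {
        my ($time, $text) = ($1, $2);
        # ...schedule $text to be said on the channel at $time...
        return "OK, at $time I'll say: $text";
    }
    return "Sorry, I didn't understand that";
}

package main;
CronicSketch->new(
    server   => 'irc.freenode.org',
    channels => ['#bots'],
    nick     => 'cronic-sketch',
)->run;
```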
There are still plenty of things for me to do:
Implement time zones; migrate to DateTime. Work in progress. This has involved me grokking DateTime and associated modules.
Documentation. I need to put up a webpage somewhere with "how to use this bot".
Persistent events in a database. These could include perl monger meeting reminders and people's birthdays.
More commands than just "say". It would be useful for cronic to switch the topic, for example.
The bot is up and running on #bots for those who want to have a play. I'll try and catch any scrollback, but if you want, you can email bug reports to cronic at xemaps.com.
Yesterday evening, I attended an evening social event. This was a follow-up to the web frameworks talks in November, with presentations on Catalyst, Django (Python) and Ruby on Rails.
I was due to be at this venue in any case, as it was the AGM of the Cogers. I had a smile seeing the sign on the staircase to the minstrel gallery saying that the area was reserved for "London 2.0 RC1", and was reminded of some newbie asking where you could download London.pm from and what it does.
There were plenty of Python and Ruby geeks present, but only one Perl guy besides myself. A poor show, because I know that this event was posted and discussed on the London.pm list.
For once, I didn't find myself having to defend or make excuses for Perl. A Python guy asked me about Perl 6, expecting me to be embarrassed; I told him all about Pugs and the effect it has had on the Perl community, which was news to him. I was also able to update him on Parrot, which he was under the impression was moribund. However, it's probably just pining for the fjords.
Even if there hadn't been a Cogers meeting in the same venue, I may well have turned up anyway. I will certainly go to the next one if another such meeting is organised. Come on, Perl community. Come on, London.pm. Where were you?
In the aftermath of the September 2005 incident, I have been thinking about steps we can take to prevent a recurrence, or at least minimise the impact.
As it is, I have lost quite a number of writeups and updates I have made in the last six months.
I have been looking at decentralising the OpenGuides data by using one or more mirror sites, which hold all the data and are kept up to date. I'm using CPAN mirroring as my model.
In terms of how to do this, each page has wiki content and metadata. The wiki text can be retrieved using format=raw - for example, by appending ;format=raw to a node's URL.
This was recently implemented by hex (cheers mate!) - though you could previously achieve the same result by using action=edit and scraping the HTML response for the CGI form corresponding to the text.
The metadata is obtained in RDF/XML format using format=rdf. This has highlighted a number of issues, resulting in several RT bug tickets for OpenGuides. It has also resulted in a CPAN module, OpenGuides::RDF::Reader, which standardises data retrieval from OG, mapping namespace-qualified tag names onto more directly meaningful hash keys. In principle, these translated hash keys match the values going into the metadata_type column of the metadata table.
The idea is that a guide mirror can pull down new and changed pages as they are detected from the RecentChanges RSS feed.
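A mirror pull along these lines might be sketched as follows. The base URL is hypothetical, and the action=rc;format=rss query for the RecentChanges feed is an assumption about the source guide's configuration - only format=raw and format=rdf are confirmed above:

```perl
use strict;
use warnings;
use LWP::Simple qw(get);
use XML::RSS;

my $base = 'http://example-guide.org/wiki.cgi';    # hypothetical guide URL

# Poll the RecentChanges RSS feed for new and changed pages
my $rss = XML::RSS->new;
$rss->parse( get("$base?action=rc;format=rss") );

for my $item ( @{ $rss->{items} } ) {
    my $page = $item->{title};
    my $raw  = get("$base?id=$page;format=raw");   # wiki text
    my $rdf  = get("$base?id=$page;format=rdf");   # metadata as RDF/XML
    # ...store $raw and $rdf locally, recording $base as "source"...
}
```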
The guide mirror gives us new possibilities, such as having all OG data on one website. This will allow an aggregated search over all the Guides.
The other aspect of this is that the data pulled from another site comes with a hash key "source", containing the URL the data came from. This will allow implementation of Creative Commons attribution, and will allow a future release of OpenGuides to redirect all edit requests to the source website.
Exciting stuff! More to come...