Journal of pdcawley (485)

Wednesday July 17, 2002
04:30 PM

My mother in law died this morning

Pneumonia, and other complications arising from lung cancer.

At least it was quick.

And, last night, I learned that I'm going to be an uncle again.

I think the word is 'bittersweet'.

Thursday July 11, 2002
03:23 PM

A story of the silver age

I was going through my mail archive looking for something completely unrelated when I came across this. It was written for a bunch of whippersnappers on a mailing list at the back end of last year and I'd pretty much forgotten about it. But when I found it again I found myself thinking "Hey, this isn't bad; it deserves a wider audience." So, here you go.

Gather 'round oh my children and let Uncle Piers tell you a story of the days when September only lasted around 50 days. The elder days, before, during and just after The Great Renaming. For I was there.

When I went up to Nottingham I knew nothing about Usenet or the ARPAnet (in fact, in my years at university I never got ARPAnet access, though it was apparently available during my last year there). There was this weird bulletin board system called, if I remember rightly, 'info', which carried mostly local groups, and something called sf-lovers which was a science fiction discussion group and which was far and away the most active.

Then one day in my second term, a second year told me to 'type rn, it's cool'. So I did. And it was (thanks Larry). I read the Frequently Asked Questions lists on net.announce.newusers with a mounting 'wow! this is really cool!' reaction, and off I went to start reading. I didn't actually send my first post to usenet. The 'hundreds, if not thousands of dollars' warning successfully put me off.

Looking back, the only real difference between Usenet then and Usenet now was that there was no commercial spam and the volume was way lower. There were still idiots who'd crosspost to 101 newsgroups, but that was generally ok, so long as they didn't multipost. There were still flaming lusers and splendidly stupid flamewars (generally about Robert Heinlein, at least on net.sf-lovers, later rec.arts.sf-lovers). The mythical, luser free age was already looked back on nostalgically by the Old Farts, but I got the strong feeling that it never really existed. We were (and still are) a cliquey bunch and I'm quite sure that we've always had to deal with people we thought of as assholes.

What has changed since those days? (And, to a lesser extent, since my early days as a Demon customer and later as the sysadmin at Frontier.) Well, there's been a steady erosion of trust, which has been awful to watch. Back in the day, if a site had access to the 'net you could be reasonably sure that the postmaster at least had a clue, and was probably in a position of some authority over the people who posted from that site. Posting privileges could be (and sometimes were) removed from abusive users. You could generally rely on people not to abuse the system. Just look at the protocols in use at the time. By today's standards they seem terribly naive; most of the time they worked by relying on everyone to play nice and not to start lying to machines. And, dammit, for a long time that was all that was needed.

The world changed with Canter and Siegel, and with the Morris Worm. I don't think we realised quite how much C&S's spam changed the world at the time, but looking back, here were people who genuinely didn't care how much they were costing the 'net and who had consciously broken the rules (and in those days they were real rules) about commercial traffic. Trouble is, they could, and probably did, point at all the .com companies for whom the rules had been quietly bent in order to allow them onto ARPAnet and Usenet. But C&S really drove home the point that these rules were just a gentlemen's agreement and totally unenforceable. The RTM worm did something similar, but it pointed out that we weren't in a playground any more. There were Bad Men out there (or in RTM's case, possibly, Naive Men) who didn't care if they broke things.

By the time that the ISPs started going, the world had changed again. The people who gained access via Delphi, Genie, AOL and others were no longer being granted access by some benevolent sysadmin. They were paying for it directly and they were the masters now. They had their own, provider culture and they expected Usenet to fit how they expected the world to work. Look at early posts from these people, and the reaction to them, and you'll see massive communications breakdowns between two very different world views. There was always going to be trouble.

But, on usenet at least, all that really happened was that the volume of traffic exploded yet again, so we got a whole lot more kooks and a whole lot more spammers. And a few of us went off and started the network that shall not be named, which is still going strong, and we also attempted to start usenet2, which failed dismally.

But I still lament the death of trust. It pains me that there are contemptible little shits out there who are prepared to run DDoS attacks, or who are happy to break into a box simply because it's there. I hate the fact that I've had to become paranoid. I hate having to run spam filters and I really hate that spam still gets through. I hate the fact that my cix email address gets nothing but spam eight years after it was last used to post something to Usenet, that the GOOD NEWS email virus actually exists now albeit under a different name and I despise Microsoft for unapologetically allowing it to happen. I hate the fact that, if I want to reply to someone on usenet by mail now, the odds are good that if I just hit 'r' and send the message then that message will bounce because of address munging.

All that (and a good deal more if I really start ranting) said, the 'net is still a fantastically useful and, for me at least, life enhancing resource. Tools and places like google, imdb, amazon and its brethren, the kgs go server, okbridge, PernMUSH, the scary devil monastery, and all those other places where cool people hang out have enriched my life. A couple of years ago Gill and I went to the 'States for a couple of weeks and had a fantastic time staying with people that I'd only met on the net before. They were, without exception, great people who made our holiday so much more enjoyable. Without the 'net, we'd probably still have had a fine holiday, but I doubt we'd've taken the trouble to visit Seattle, which was fantastic, or to then drive from Seattle to LA (which was just stunning, I love the Oregon coast, and coming into San Francisco over the Golden Gate Bridge on a gorgeous, clear evening as the sun was setting is definitely something I'm not going to be forgetting in a hurry).

These are resources that should be shared and that everyone should have the opportunity to use. The loss of trust is, I believe, a price worth paying, but you can make your own decisions.

Wednesday July 10, 2002
02:59 PM

Thoughts on music distribution

So, what do we know?

  1. Heavy users of music sharing software buy substantially more records than average.
  2. The Recording Industry is doing everything in its power to shut down music sharing systems.
  3. The 'theft' of music via the likes of Napster is being used as an excuse to try and enforce nasty crypto based techniques for 'protecting' IP rights.

Why? The evidence would appear to imply that free access to music actually helps the sales of that music, so surely this is something that the recording industry should be encouraging (or at least, deliberately protesting ineffectually about). So why aren't they?

Because the music sharers aren't the people they're worried about. Nor are the DivX sharers.

Right now, the record companies are the gateway that artists have to pass through to reach a larger public. They managed to enforce their (collective) monopoly because, until recently, distribution of the physical media has been the hard part. Except now, it isn't.

Tools like Napster had the potential to let artists reach a public without giving up their freedom.

Here's a case in point. I'm in the process of getting the master tapes of my wife and her friend's album transferred to CD. The album is a cappella harmony singing, of mostly traditional material; the potential audience is small. But, had AudioGalaxy still been up, it would have been the work of very few moments to add tags to the MP3s providing information, not just about the songs, but pointers to where to buy a 'real' CD. Okay, we wouldn't see many sales, but any sale that we did see would be one more sale than would otherwise have been made. And the music would be out there, where it can be heard.
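For illustration: an ID3v1 tag is just 128 fixed-width bytes stuck on the end of the file, so even the bare-bones version is only a pack away. (MP3::Tag on CPAN does the job properly; the filename, the helper name and the tag values below are all invented for this sketch.)

```perl
use strict;
use warnings;

# Append a minimal ID3v1 tag: "TAG", then fixed-width, NUL-padded
# fields -- title(30) artist(30) album(30) year(4) comment(30) genre(1).
sub append_id3v1 {
    my ($file, %t) = @_;
    my $tag = pack 'a3 a30 a30 a30 a4 a30 C',
        'TAG',
        $t{title}   // '',
        $t{artist}  // '',
        $t{album}   // '',
        $t{year}    // '',
        $t{comment} // '',   # room for a short 'buy the CD here' pointer
        $t{genre}   // 255;  # 255 == no genre set
    open my $fh, '>>', $file or die "$file: $!";
    binmode $fh;
    print {$fh} $tag;
    close $fh or die "$file: $!";
}

append_id3v1('song.mp3',
    title   => 'Long Black Veil',
    artist  => 'Hypothetical Harmony Duo',
    comment => 'CD: example.com/cd',
);
```

Thirty bytes of comment isn't much room for a URL, which is exactly why richer tag formats (and modules like MP3::Tag) are the better tool for the real thing.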

As the P2P networks get closed down, this sort of distribution becomes harder. Sure, artists can always publish material on their website, but that rules out serendipity. For instance, I was recently looking for recordings of the traditional song Long Black Veil on AudioGalaxy, and found a whole load of versions, by artists I knew and ones I'd never heard of, many of which were fantastic. I also know I'm unlikely to be buying any Dave Matthews Band albums in the future.

And, once the 'rights managed' systems come online, I don't doubt there'll be pressure to sell hardware that can only play managed content. And where does that leave the artist without a recording contract?

I honestly don't think I'm being paranoid. I don't think the record industry is actively trying to screw us. It's trying to protect its revenue stream into the future by controlling access to the market. And that's so wrong it makes my teeth hurt.

Friday July 05, 2002
06:47 AM


So, you may or may not know that I've been working on and off on PerlUnit (aka Test::Unit::TestCase), a Perl port of Kent Beck's xUnit.

Now, as these things go, PerlUnit is pretty good; it's a fairly straight port of JUnit, which is a Java version of the Smalltalk original, SUnit.

The problem is that it's not really very perlish. You can run PerlUnit tests in such a way that Test::Harness can use the results, but it can be awkward.

Then, along comes Test::Class and it's lovely. Adrian Howard has done a fantastic job of taking the basic ideas behind xUnit and the simple interface of Test::More/Simple that we're used to, and producing a really useful synthesis of the two.

He doesn't try and do as much as PerlUnit, and there are a few surprises if you're used to the xUnit approach, where a failing assertion (which is about the granularity of an ok() in Test::More) bails out of the current test method. PerlUnit counts test methods; Test::Class counts the tests done within methods (and requires you to declare that count, though it makes doing so easy). The list continues.
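To make the contrast concrete, here's a minimal sketch of the counted-tests-within-methods style, using only core Test::More rather than Test::Class itself (which automates the method plumbing); the stack example and method names are invented for illustration.

```perl
use strict;
use warnings;
use Test::More;

# Each "test method" contains its own assertions; Test::More counts
# them as they run, so a failure in one assertion doesn't abandon the
# rest of the method -- unlike the xUnit bail-on-first-failure style.
our @methods_run;

sub test_push {
    push @methods_run, 'test_push';
    my @stack;
    push @stack, 42;
    is scalar @stack, 1,  'one element after push';
    is $stack[0],     42, 'and it is the one we pushed';
}

sub test_pop {
    push @methods_run, 'test_pop';
    my @stack = (42);
    is pop(@stack), 42, 'pop returns the element';
}

$_->() for \&test_push, \&test_pop;
done_testing();
```

Test::Class essentially wraps this pattern in method attributes like `: Test(2)` and finds the methods for you, which is where the convenience comes from.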

And then, to cap it all, Test::Class is tiny; PerlUnit is, frankly, enormous. I'm sure there are times when I'll find myself hankering for some deep feature of PerlUnit, but for now, I'm switching to Test::Class.

Oh yes, and Test::Class plays well with Test::MockObject, which is another top notch testing tool.

Wednesday July 03, 2002
03:25 AM

ActiveState Awards or, "Why I didn't vote for Damian"

As Matts has already noted, the nominations for the ActiveState Award are up. The nominees are Schwern, Matt Sergeant, Michael Peppler, Damian Conway and Bill Luebkert.

Now, grumbles about the voting system aside (there's no way that I, as a Perl programmer, am competent to vote in the other polls; I haven't the faintest idea who any of those people are, for heaven's sake)...

Now, as a long time supporter of Mr Conway, you'd think I would have immediately stuck my cross next to his name and made a vote. Well... no; I voted for Schwern.

Why? Because, as I understood the Award, it is supposed to be for the 'unsung heroes'. And Damian, for all he's a great guy, superb speaker, top notch programmer and trainer, isn't exactly unsung. Hell, he has an Award named after him.

Schwern, however, is pretty much unsung outside the Perl community (and in some cases within that community). The work he's done hasn't exactly been glamorous, but the testing effort that he's spearheaded is vitally important. Because of Schwern's efforts, the perl core is better tested than it's ever been, and we have better and more immediately accessible tools for writing tests in our own modules. And this is very good indeed.

So, vote Schwern. Vote early. Vote often. Nag Activestate to allow voting in a subset of all the categories.

Tuesday June 25, 2002
03:23 AM

What did I do yesterday?


I wrote a Perl 6 Summary and posted it to the perl6-announce, perl6-language and perl6-internals mailing lists. Hopefully it'll end up on at some point. I intend to keep doing it for a while.

And I've been thinking about funding. I'm doing these summaries because I think they're important (I missed 'em when they were gone), I don't want any money for doing it, but it'd be really cool if my doing this work could help fund Larry, Dan & Damian (and whoever gets any grant next year...). Which leads me to wonder if there could be some way of setting up 'accounts' with YAS for this sort of funding. Say you think the Perl 6 summary is worth ten of your hard earned bucks, you could make a donation to YAS saying, in essence "That's for the Perl 6 summaries", and the donations list would reflect that.

You could do similar things for modules. Say you really, really like Andy Wardley's Template Toolkit. Now, if there was a Template Toolkit YAS account, you could donate money under that label.

What do you think?

What else did I do? I got Apache::MP3 up under Apache 2 and mod_perl 2. The required changes are actually quite small, but they are incompatible with mod_perl 1.xx. Which is annoying. I'm going to see about refactoring so we can either have Apache2::MP3 and Apache::MP3, or just a plain Apache::MP3 that works with both versions of mod_perl. (Essentially, if you're interested, the changes are: use Apache::Const instead of Apache::Constants, and you need to declare sub handler : method {...} to have the handler used in an OO fashion.)

Oh yes, I went to the dentist for the third time in as many weeks (or the third time in about 6 years...) and had the first stage of a root canal done. Ouchie. At least it's on the National Health, but it definitely wasn't fun. The tooth is still aching...

Wednesday June 05, 2002
07:46 AM

A puzzle

So, what do you think the following code does?

package Bar;
use overload '""' => sub { "Bar" };
package main;
$a = $b = {};
bless $b, 'Bar';
print $a, "\n";
print $b, "\n";

It's not:

HASH(0x...)
Bar

which is what I would expect. Instead we get:

Bar
Bar

Which isn't exactly great. This is a problem on (at least) perl 5.8.0rc1, 5.6.1 and 5.6.0.

I think this is the same problem as I tripped over when I was working on reblessing objects in Pixie from a class which did have overloads to one which didn't.
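For what it's worth, the behaviour makes a kind of sense once you remember that blessing is a property of the referent, not the variable: $a and $b are two names for one hash, so blessing it into Bar gives both of them Bar's stringification overload. The core overload module's StrVal function lets you see past that. A small self-contained demonstration (I've used $x/$y rather than the special sort variables):

```perl
use strict;
use warnings;

package Bar;
use overload '""' => sub { "Bar" };

package main;

my ($x, $y);
$x = $y = {};          # two variables, one shared referent
bless $y, 'Bar';

# Blessing (and overloading) belongs to the referent, so *both*
# variables now stringify through Bar's '""' overload:
print "$x\n";   # Bar
print "$y\n";   # Bar

# overload::StrVal (core) bypasses string overloading entirely and
# shows the underlying form:
print overload::StrVal($x), "\n";   # e.g. Bar=HASH(0x55d3...)
```

So the puzzle isn't that the overload fires for $b, it's that there's no un-overloaded view left for $a without reaching for overload::StrVal.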

Friday May 31, 2002
04:35 AM

Thinking about Object persistence. Again.

So, you may or may not know that I'm working on (yet another) Object persistence tool (with james and acme of this parish).

"An object persistence tool?" You ask, "There's hundreds of 'em, aren't you just reinventing the wheel?"

Well, yes and no. We've looked at a whole heap of tools that are available and none of them fit the bill for what we want. Our design goals have been, in no particular order (I've numbered the list so I can refer back):

  1. Allow the user to throw any kind of object at the database, whether built on a hash, an array, a pseudo hash or any of the other things that one can build objects with.
  2. Don't require any support from the user classes. But provide hooks so that, if the user wants to 'help' the persistence framework, she can.
  3. Be neutral about the physical storage.
  4. Don't require a schema before anything can be stored.
  5. Keep it Simple. You should be able to store an object and get a cookie back, and then use that cookie later to retrieve the object. And that is all. Complex indexing and querying should be something that can be built on, not something that is built in.
  6. Don't pull in the world. If I fetch an object that's at the top of a tree I don't want the entire tree fetching immediately. Unless I explicitly ask for it of course.
  7. Try not to 'save the world'. Don't go saving the same object 100 times. Try and avoid saving objects that haven't changed since they were last saved.

And do you know, it's starting to come together. We're using James' trick with Data::Dumper and bless as a way of walking object trees (which hurts your head the first few times you look at it, but which gets rid of a whole pile of treewalking code). Once you get your head wrapped 'round how it works, it all starts to seem remarkably simple...
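I won't reproduce James' actual code here, but one way such a trick can work is via $Data::Dumper::Bless, which swaps the word 'bless' in Dumper's output for a function of your choosing; eval the dump and that function gets called once per object in the graph, inner objects first. A hedged sketch (the My::Walker package and both toy classes are invented for illustration):

```perl
use strict;
use warnings;
use Data::Dumper;

my @visited;

# Stand-in for bless: record every object the dump passes through,
# then bless it as normal so the eval'ed copy is still a real object.
sub My::Walker::visit {
    my ($ref, $class) = @_;
    push @visited, $class;
    return bless $ref, $class;
}

my $leaf = bless { name  => 'leaf' },           'Leaf';
my $root = bless { child => $leaf, n => 1 },    'Root';

local $Data::Dumper::Bless = 'My::Walker::visit';
my $dumped = Dumper($root);     # nested visit() calls instead of bless()

our $VAR1;                      # the name Dumper uses in its output
my $copy = eval $dumped;        # replaying the dump walks the tree
die $@ if $@;

print "@visited\n";   # Leaf Root -- inner objects are visited first
```

The appeal is that Data::Dumper already knows how to traverse hashes, arrays and nesting, so you get the walk without writing any recursive treewalking code yourself.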

So, what do we have working? Part 1 is really, really hard in the general case. For now we can reliably store most 'pure perl' objects that aren't based on CODE refs. Storage is more efficient (i.e. we can do deferred fetching tricks) if the classes support an _oid method. Things fall apart (badly) in the case where we have XS classes that use the blessed reference as a key to some Perl-inaccessible data structure out in C space. The general rule of thumb is that, if an object has state that Data::Dumper can't see, then neither can we. It'd be really cool if Data::Dumper were a little more 'hooky', and checked for, say, a '_Dumper' method on all classes and could hand off the responsibility for serializing to perl code to those objects that wanted it...
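As it happens, Data::Dumper does have a pair of hooks in roughly this spirit: $Data::Dumper::Freezer names a method that, if an object provides it, is called before the object is dumped (it must modify the object in place; its return value is ignored), and $Data::Dumper::Toaster names the inverse, invoked when the dump is revived. A minimal sketch (the Handle class is invented for illustration):

```perl
use strict;
use warnings;
use Data::Dumper;

package Handle;

sub new {
    my ($class, $path) = @_;
    # The filehandle is the sort of state a dumper can't usefully see.
    bless { path => $path, fh => \*STDOUT }, $class;
}

# Freezer hook: strip the undumpable field before serialization.
# NB: this mutates the live object, and the return value is ignored.
sub freeze { my $self = shift; delete $self->{fh} }

package main;

local $Data::Dumper::Freezer = 'freeze';   # method name, called if present
my $dump = Dumper(Handle->new('/tmp/x'));
print $dump;    # the dump has 'path' but no 'fh'
```

Because Freezer works on the live object, a real persistence layer would want to freeze a copy; still, it's close to the '_Dumper'-style handoff wished for above.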

Point 2 is pretty much covered (modulo what was discussed in point 1). Objects can help by providing an _oid method, but we can generally cope if they don't.

Point 3, yup, we do that, we have Pixie::Store::* objects for DBI, BerkeleyDB and Memory (that last is a hangover from early algorithm testing...). The DBI store uses the very lovely DBIx::AnyDBD to cope with database inconsistencies. (If you've not looked at DBIx::AnyDBD and you do anything with databases then I strongly recommend you take a look. Matt Sergeant is a genius.) Currently the only database we have a 'specific' module for is MySQL, but that's just to get round the lack of real transactions by using their atomic REPLACE.

Point 4. Yup, we do that. We not only don't require a schema, we wouldn't know what to do with one if we had it.

Point 5. Yup. It's simple all right.

Point 6. Yup. If an object contains other objects that are stored separately (objects that have oids) then we don't pull them in immediately, but instead provide a proxy.

Point 7. Sort of. A lot of this falls out as a result of deferred loading. When we store an object we store all the 'real' objects that are 'reachable' from it, but we stop when we reach a proxy object. We could probably be more efficient, but I've not had a hard think about pathological cases yet.

We've now reached the point where I'm thinking of deploying it for an internal application. There are still issues though. Here are a few that are making me think at the moment.

  1. Garbage collection. I'm not about to go doing bad things involving reference counting in this database, that would be bad. But that means I need to think about a mark & sweep approach to deletion, and that, in turn means maintaining a 'root set' of objects. Ah well, it'll probably fall out as 'just another index' once we've added indices.
  2. Indexes. We want to be able to do things like 'get me all the Widget objects' without having to load every single object in the database and do an 'isa' query on it. This means we need some way of adding indexes to the store. Ideally I'd like to be able to either have an index from the beginning, maintained by insertions and deletions or to add one late in the game and have it build itself.
  3. Querying. I'm not a big fan of trying to bend an SQL-like query language to fit an OO world; I'm more inclined to go with a system of filtering of sets/indices. If you want a full-on query language it should be possible to build it on top.
  4. Uniqueness. Not sure it's the right word. I'm talking about the idea that, if you have an object with an oid of, say, 'Homer', then there should be one (and only one) object with that oid in memory (and ideally it should be unique across all processes too, but that's just scary).
  5. Granularity(?). Sometimes you want an object that writes its every change back to the database. We don't do this. But it should be possible to implement something on top of what we have.
  6. Atomicity. We're a hostage to our store for this. It should be possible to insert a complex object into the database (one that points to a bunch of other, sub objects) on an all or nothing basis. If one sub insertion fails then all the others should be rolled back. This should work in the BerkeleyDB store and on DBI backed stores that support transactions, but it'd be really cool if we could support it on non transactional stores too...
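To make point 1 concrete, here's a toy sketch of mark & sweep over a flat oid-to-object store; the %store layout, the refs field and the oids are assumptions of mine for illustration, not Pixie's actual format.

```perl
use strict;
use warnings;

# A flat store: oid => object, where each object can report the oids
# it references (in Pixie this would come from walking the object).
my %store = (
    root  => { refs => ['a', 'b'] },
    a     => { refs => ['c'] },
    b     => { refs => [] },
    c     => { refs => [] },
    junk  => { refs => ['junk2'] },   # unreachable garbage
    junk2 => { refs => [] },
);
my @root_set = ('root');              # the 'root set' of live objects

# Mark phase: breadth-first from the root set, no reference counting.
my %marked;
my @queue = @root_set;
while (my $oid = shift @queue) {
    next if $marked{$oid}++;          # already seen: also breaks cycles
    push @queue, @{ $store{$oid}{refs} };
}

# Sweep phase: delete everything that wasn't marked.
delete @store{ grep { !$marked{$_} } keys %store };

print join(',', sort keys %store), "\n";   # a,b,c,root
```

The nice property is exactly the one mentioned above: the root set is just another piece of store metadata, so it can plausibly fall out as 'just another index' once indices exist.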

"So, where can I find this paragon of perl persistency? And what's it called?"

Well, it's called 'Pixie' (ask Acme), and right now you can't have it. Some of the current tests are adapted from Tangram and are only distributable under the GPL, and we want Pixie to be distributed under the same terms as Perl. So we need to either get permission to distribute them under a dual license, or get rid of them entirely.

I'll be off to ask for permission as soon as I've hit 'save' on this form, so we should know one way or the other soon.

Sunday May 26, 2002
11:35 AM

And... relax!

Well, we've just handed my 18 month old nephew, Albert, back to his parents after looking after him for, ooh... a day and a half.

And we're knackered. He was, of course, utterly wonderful (though the time when he flung himself into my arms from halfway up the stairs was a moment I could have done without) but very, very tiring. How do the parents among you do it?

Tuesday May 21, 2002
10:46 AM

It's the little things that bring happiness

I just found out that my project manager's surname means 'Fool'. Now, he's a decent bloke and everything, and good at the job.

But it still brings a big smile to my face knowing that my project manager is a fool.