

Journal of hanekomu (8123)

Thursday January 17, 2008
06:35 AM

Catalyst - Accelerating Perl Web Application Development

(The following is a review of the book "Catalyst - Accelerating Perl Web Application Development".)

Catalyst is a Web development framework that should not need an introduction. Thanks to Catalyst you no longer have to look towards Ruby on Rails if you want to develop Web applications in a flexible, efficient and effective way.

Personally, I'd stayed away from writing Web applications because doing so involved a lot of repetition, produced code that wasn't very reusable and was, overall, quite boring. Catalyst has changed that for me. It is well designed, encourages reusable code, is extensible - everyone likes to write plugins - and leads to fast application development.

This is the first book on Catalyst. The book's author, Jonathan Rockway, is one of the main developers on the Catalyst project, so he knows what he's talking about. He takes you through a series of step-by-step tutorials in which you learn how to develop ever more ambitious Web applications.

First of all, you learn how to install and set up Catalyst. This is a relatively straightforward process. Catalyst's core concept of MVC (Model-View-Controller) is explained next, and afterwards you're already creating a bare-bones Catalyst application and generating dynamic web pages. By the end of chapter two you've already created a simple database, used it as a model, and set up a controller to forward the data to a Template Toolkit-based view. All of this took me less than fifteen minutes to read, write, set up, run, test and sort-of understand.
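To give a flavour of the controller code the early chapters have you write, here is a hypothetical sketch of a minimal Catalyst root controller. The application name MyApp and the stash key are invented for illustration; this is not code from the book.

```perl
# A minimal Catalyst controller sketch (names are illustrative).
package MyApp::Controller::Root;
use strict;
use warnings;
use base 'Catalyst::Controller';

# Put this controller at the application root instead of /root.
__PACKAGE__->config(namespace => '');

# Matches e.g. http://localhost:3000/ with no further path parts.
sub index :Path :Args(0) {
    my ($self, $c) = @_;

    # Data placed in the stash is visible to the view; a Template
    # Toolkit template would render it with [% message %].
    $c->stash->{message} = 'Hello, Catalyst!';
}

1;
```

The attribute-based dispatch (:Path, :Args) is one of the things that makes Catalyst controllers so compact compared to hand-rolled CGI dispatching.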

The next chapter shows you how to build a more ambitious address book application with basic CRUD (create, retrieve, update, delete) functionality, again in a series of easy-to-understand steps. Then you add sessions, authentication and authorization to the address book application.

You also learn how to write your own model classes, how to effectively test your Web application and how to deploy it to your web server. None of this is particularly difficult. Along the way, the author also introduces us to DBIx::Class, a flexible object-relational mapper that is held in high regard within the Perl community.
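Testing a Catalyst application is pleasantly lightweight, because Catalyst ships with Catalyst::Test, which lets you make requests against the application without starting a server. A hedged sketch (again assuming the invented MyApp from above):

```perl
# A sketch of a Catalyst application test; MyApp is illustrative.
use strict;
use warnings;
use Test::More tests => 2;

# Catalyst::Test loads the application and exports request()/get().
use Catalyst::Test 'MyApp';

my $response = request('/');
ok($response->is_success, 'request to / succeeds');
like($response->content, qr/Hello/, 'page contains the greeting');
```

Tests like this run under the ordinary prove/make test machinery, so a Catalyst application can be tested like any other CPAN-style distribution.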

Catalyst is also Web 2.0 buzzword-compliant. One chapter of the book shows you how to add a REST API, how to interact with AJAX for a more responsive user interface, and how to create an RSS feed for your application's data.

You need to know a fair bit about the underlying technologies such as Perl or the Template Toolkit already; this book is a guide for competent developers to get into the Catalyst state of mind. It does not babysit you through every last step, which, for the record, I think is a good thing.

It is a relatively thin book - based on the table of contents I'd read online, I would have expected a much heavier volume. But after working through the book, I'm impressed by how many topics are covered in an easy-to-understand format. Like Perl, Catalyst makes easy things easy, and hard things possible. A lot of interesting topics are only touched upon, though; again, you're expected to explore on your own, using Catalyst's own documentation, its tutorials, and the CPAN, where you will find loads of plugins to extend every aspect of Catalyst.

The layout of the book isn't overly inspiring, but it also doesn't get in your way.

To conclude, if you're interested in what Catalyst has to offer, you need to read this book. It contains more information, and in a lot more coherent form, than can be found in Catalyst's own documentation.

Catalyst - Accelerating Perl Web Application Development
Jonathan Rockway
Packt Publishing
ISBN 978-1-847190-95-6
$39.99, GBP 24.99

Tuesday December 11, 2007
04:16 PM

dotfiles.org

My dotfiles are at http://dotfiles.org/~hanekomu - they are also part of Dist-Joseki. Thanks to Andy Lester for the tip.

07:35 AM

Babbage's WTF

Proof that the WTF moment existed even in the 1850s:

"On two occasions, I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." - Charles Babbage (1791-1871)

Saturday November 24, 2007
06:38 AM

App::sync_cpantesters

I've released App::sync_cpantesters. Here is the manpage of bin/sync_cpantesters:

NAME

sync_cpantesters - Sync CPAN testers failure reports to local dir

SYNOPSIS

sync_cpantesters -a MARCEL -d results -v

DESCRIPTION

CPAN testers provide a valuable service. The reports are available on the Web - for example, for CPAN ID MARCEL, the reports are at http://cpantesters.perl.org/author/MARCEL.html. I don't like to read them in the browser and click on each individual failure report. I also don't look at the success reports. I'd rather download the failure reports and read them in my favourite editor, vim. I want to be able to run this program repeatedly and only download new failure reports, as well as delete old ones that no longer appear in the master list - probably because a new version of the distribution in question was uploaded.

If you are in the same position, then this program might be for you.

You need to pass a base directory using the --dir option. For each distribution that has failure reports, a subdirectory is created. Each failure report is stored in a file within that subdirectory. The HTML is converted to plain text. For example, at one point in time, I ran the program using:

sync_cpantesters -a MARCEL -d reports

and the directory structure created looked like this:

reports/Aspect-0.12/449224
reports/Attribute-Memoize-0.01/39824
reports/Attribute-Memoize-0.01/71010
reports/Attribute-Overload-0.04/700557
reports/Attribute-TieClasses-0.03/700575
reports/Attribute-Util-1.02/455076
reports/Attribute-Util-1.02/475237
reports/Attribute-Util-1.02/477578
reports/Attribute-Util-1.02/485231
reports/Attribute-Util-1.02/489218
...

COMMAND-LINE OPTIONS

--author <cpanid>, -a <cpanid>

The CPAN ID for which you want to download CPAN testers results. In my case, this ID is MARCEL.

You have to use exactly one of --author or --uri.

--uri <uri>, -u <uri>

The URI from which to download the CPAN testers results. It needs to be in the same format as, say, http://cpantesters.perl.org/author/MARCEL.html. You might want to use this option if you've already downloaded the relevant file; in this case, use a file:// URI.

You have to use exactly one of --author or --uri.

--dir <dir>, -d <dir>

The directory you want to download the reports to. This can be a relative or absolute path. This argument is mandatory.

--verbose, -v

Be more verbose.

--help, -h

Show this documentation.

Saturday November 10, 2007
05:59 AM

Carp::Source

I've released Carp::Source. It exports a function, source_cluck(), that does pretty much the same as Carp's cluck(), except it also displays the source code context of every call frame, showing three lines before and after each call, with the call itself highlighted. Enjoy.
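A minimal usage sketch - source_cluck() is the exported function named above; the surrounding subroutines are invented to produce a couple of call frames:

```perl
# Illustrative sketch of using Carp::Source's source_cluck().
use strict;
use warnings;
use Carp::Source 'source_cluck';

sub inner {
    # Like Carp's cluck(), this warns with a full stack trace, but
    # each frame also shows highlighted source context.
    source_cluck 'something odd happened';
}

sub outer { inner() }

outer();
```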

04:24 AM

Productive week

The last week has been very productive. I've uploaded about twenty new distributions and made updates to even more existing distributions. The reason is that I've hurt my knee quite badly, had to have an operation and thus spent the last week and a half effectively in bed. Luckily the new MacBook has arrived in time and so every day was spent hacking Perl distributions from morning to night. I've also taken copious amounts of medication (mostly painkillers). Listening to minimalistic music (Rei Harakami, Ryoji Ikeda etc.) made the hacking sessions even more enjoyable.

Ideas beget ideas, however, so the TODO file has actually grown...

Thursday November 08, 2007
11:01 AM

Module-Changes

There has been some discussion about a machine-readable Changes file.

I'm maintaining a few distributions myself and have phases of making some changes to several distributions. Opening the Changes file, copying a few lines, inserting the current date and time and so on gets tedious.

I wanted a command-line tool with which to interact with Changes files. Also, the Changes file should be machine-readable. So I wrote Module-Changes. I've chosen YAML for the format, although this is by no means mandatory - it's easy to write a new parser or formatter for your format of choice. Integration of new parsers, formatters and so on is something I still have to work on, though.

Some see YAML as a "failed format", but enough people (me included) find it useful and easy to read for both humans and machines, so that's what I've chosen as the default format. Even so, we need to agree on a YAML schema - that is, the layout of the Changes file.
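To make the discussion concrete, here is a hypothetical sketch of what such a YAML Changes file might look like. The field names and layout below are invented for illustration; the actual schema is exactly what is up for discussion.

```yaml
# Hypothetical YAML Changes file layout (illustrative, not the
# actual Module-Changes schema).
---
global:
  name: Foo-Bar
releases:
  - version: 0.02
    date: 2007-11-08T11:01:00Z
    changes:
      - fixed warnings under perl 5.10
      - added more tests
  - version: 0.01
    date: 2007-10-01T09:00:00Z
    changes:
      - original version
```

A structure along these lines is trivial for a machine to parse while remaining perfectly readable in a pager or editor.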

This is not set in stone; it's more of a proposal. I'm hoping for a discussion of what people like or don't like in the current version, and what they would like to see in future versions.

Tuesday October 23, 2007
01:50 PM

Using ack for vim's :grep

If you want the power of Perl regular expressions with vim's :grep command, put this line into your .vimrc:

set grepprg=ack\ --nocolor\ --nogroup\ '$*'\ *\ /dev/null

Now you can use ack from within vim like this:

:grep some search.*string

If you want to be able to move through the results (opening each one in the editor on the line where the result occurred), add these lines to your .vimrc:

nmap <C-v><C-n> :cnext<CR>
imap <C-v><C-n> <Esc><C-v><C-n>
nmap <C-v><C-p> :cprev<CR>
imap <C-v><C-p> <Esc><C-v><C-p>

Now <C-v><C-n> will go to the next search result, both from command and insert mode, and <C-v><C-p> will go to the previous search result.

Sunday September 16, 2007
08:19 AM

18 Days Later

[ Cross-posted from http://hanekomu.vox.com ]

Has it really only been two and a half weeks since the end of YAPC::EU 2007? In this relatively short time I have already discovered some new technologies/toys, and have started to use others that I knew before but didn't really have any time or motivation to explore. Among them are, in no particular order: Plagger, SVK, del.icio.us, Twitter, Vox, Web::Scraper, ShipIt, Module::Install, miro, tvrss.net, OpenID, last.fm, SlideShare, Dopplr, TheSchwartz, Kwalify, Data::ObjectDriver, memcached, Test::Base and more.

I feel a bit like I imagine the Japanese must have felt in 1854 when Commodore Perry landed and pretty much kicked them into the technological reality of the day.

Normally a month goes by much too fast, but this time there's been so much happening in those 18 days that YAPC::EU feels long ago already.

So, thank you, Perl community; thank you, YAPC::EU!

Friday September 14, 2007
04:40 AM

Plagger hack: Engrish.com Recent Discoveries custom feed

Original posting in my Vox blog.