All the Perl that's Practical to Extract and Report


Journal of RGiersig (3217)

Thursday November 22, 2007
10:09 AM

Handy tools: Web::Scraper and XPather (Firefox)

So we have this custom monitoring tool that produces a beautiful webpage with a table saying "15 minutes ago all but one of my test requests were fine, 30 mins ago all were fine" etc. Well, we wanted to integrate that into Nagios so we don't have to stare at that screen the whole day. A colleague started off writing a check script using wget and grep. Ugly!

But I remembered Web::Scraper from the last YAPC::Vienna, so I thought I'd give it a shot too. Knowing that it can use XPath expressions, I checked whether Firefox could tell me the XPath to a certain element. And sure enough, there is that cool XPather extension!
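To give an idea of what that combination looks like, here is a minimal sketch; the URL and the XPath are made up for illustration (in reality XPather handed me the exact expression for our monitoring page):

```perl
use strict;
use warnings;
use Web::Scraper;
use URI;

# Hypothetical XPath: grab the second cell of every row in a status table.
my $status = scraper {
    process '//table[@class="status"]//tr/td[2]', 'results[]' => 'TEXT';
};

# scrape() fetches the page and returns a hashref of the captured values.
my $res = $status->scrape( URI->new('http://monitor.example.com/status.html') );
print "$_\n" for @{ $res->{results} };
```

The `process` call pairs a selector (CSS or XPath) with a name to store the match under; `'results[]'` collects all matches into an array reference.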

So I installed it from the Mozilla extension site, pointed my mouse cursor at the element on the monitoring page I wanted to capture, and with a click I had the XPath to that element. Then I started up the `scraper` CLI that comes with Web::Scraper and experimented with the XPath until I had all the elements I needed. I plugged everything together into a Nagios check script and was done after maybe 30 minutes.
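The resulting check script boiled down to something like the following sketch. The URL, the XPath, and the thresholds are placeholders, not our real ones; the only hard requirements are Nagios's conventions, namely a one-line status message on stdout and the exit codes 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Web::Scraper;
use URI;

# Standard Nagios plugin exit codes.
my %NAGIOS = ( OK => 0, WARNING => 1, CRITICAL => 2, UNKNOWN => 3 );

# Placeholder XPath for the "failed requests" cell of the most recent row.
my $check = scraper {
    process '//table[@class="status"]//tr[1]/td[3]', failed => 'TEXT';
};

my $res = eval { $check->scrape( URI->new('http://monitor.example.com/status.html') ) };
if ( !$res or !defined $res->{failed} ) {
    print "UNKNOWN: could not scrape monitoring page\n";
    exit $NAGIOS{UNKNOWN};
}

my $failed = $res->{failed};
if ( $failed == 0 ) {
    print "OK: all test requests fine\n";
    exit $NAGIOS{OK};
}
elsif ( $failed == 1 ) {
    print "WARNING: $failed test request failed\n";
    exit $NAGIOS{WARNING};
}
else {
    print "CRITICAL: $failed test requests failed\n";
    exit $NAGIOS{CRITICAL};
}
```

Wrapping the scrape in `eval` matters in practice: if the monitoring page itself is down, the check should report UNKNOWN rather than die and leave Nagios guessing.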

Oh well, Web::Scraper's documentation is nearly non-existent, but the module is so easy to use that the examples that come with it and this presentation from YAPC::Vienna gave me enough info to get a running start. And of course there is always the old "Use the source, Luke!"... ;-)
