So we have this custom monitoring tool that produces a beautiful webpage with a table saying "15 minutes ago all but one of my test requests were fine, 30 minutes ago all were fine" etc. Well, we wanted to integrate that into Nagios so we don't have to stare at that screen all day. A colleague started off writing a check script using wget and grep. Ugly!
But I remembered Web::Scraper from the last YAPC::Vienna, so I thought I'd give it a shot too. Knowing that it can use XPath expressions, I checked whether Firefox could tell me the XPath to a given element. And sure enough, there is that cool XPather extension!
So I installed it from the Mozilla add-ons site, pointed my mouse at the element on the monitoring page I wanted to capture, and with one click I had the XPath to it. Then I started up the scraper CLI that comes with Web::Scraper and experimented with the XPath until I had all the elements I needed. I plugged everything together into a Nagios check script and was done after maybe 30 minutes.
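To give an idea of what such a check looks like: here is a minimal sketch of a Web::Scraper-based Nagios check. The URL, the XPath, and the "all fine" match are hypothetical stand-ins, not the actual script — substitute whatever XPather reports for your own monitoring page.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use URI;
use Web::Scraper;

# Pull the text of one status cell out of the monitoring table.
# The XPath below is a made-up example; paste in the one XPather gave you.
my $scraper = scraper {
    process '//table[@class="status"]/tr[2]/td[3]', status => 'TEXT';
};

my $res = $scraper->scrape( URI->new('http://monitor.example.com/status') );

# Nagios plugin convention: exit 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN
if ( !defined $res->{status} ) {
    print "UNKNOWN - could not find the status cell\n";
    exit 3;
}
elsif ( $res->{status} =~ /all\s+fine/i ) {
    print "OK - $res->{status}\n";
    exit 0;
}
else {
    print "CRITICAL - $res->{status}\n";
    exit 2;
}
```

The nice part is that `process` takes the XPath straight from XPather, so the whole "wget and grep" dance collapses into one declarative rule plus the usual Nagios exit codes.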
Oh well, Web::Scraper's documentation is nearly non-existent, but the module is so easy to use that the examples that come with it and this presentation from YAPC::Vienna gave me enough info for a running start. And of course there is always the old "Use the source, Luke!"...