
Journal of scot (6695)

Wednesday April 04, 2007
05:51 PM

Method for extracting URLs for filetypes + auto-retrieval

[ #32911 ]

This post reminded me of a problem I have been trying to solve: extracting URLs that point to a specific filetype (say, a .gz archive) from a web page. It turns out that CPAN has a page with an alphabetical list of all modules, each with a hyperlink to the module's tar.gz file.

The following code (given the appropriate command-line input, e.g. gz instead of pdf) will create a text file with all of the URLs for the tar.gz files:

use strict;
use warnings;
use LWP::Simple;
use HTML::SimpleLinkExtor;

# usage: getfileoftype http://www.example.com pdf > urllist.txt
my $url      = shift;
my $filetype = shift;
my $filetypelen = length($filetype);
my $offset      = -$filetypelen;

# Save the page locally, then pull every <a href> out of it.
getstore($url, "tempfile.html");
my $extor = HTML::SimpleLinkExtor->new();
$extor->parse_file("tempfile.html");
my @a_hrefs = $extor->a;

for my $element (@a_hrefs) {
    # Compare the tail of each link against the requested extension
    # and print the ones that match.
    my $suffix = substr($element, $offset, $filetypelen);
    if ($suffix =~ m/$filetype/) {
        print $element;
        print "\n";
    }
}
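
One possible simplification of the filter (a sketch, not the code above) is to anchor the extension match at the end of each link and \Q-quote it so a dot in something like tar.gz is taken literally; this assumes the page has already been saved as tempfile.html by the script above:

use strict;
use warnings;
use HTML::SimpleLinkExtor;

# Alternative filter: match the extension literally at the end of the link.
my $filetype = shift;    # e.g. "gz" or "pdf"
my $extor = HTML::SimpleLinkExtor->new();
$extor->parse_file("tempfile.html");

for my $element ($extor->a) {
    print "$element\n" if $element =~ /\Q$filetype\E$/;
}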

Once you have that, you can use the following code to download all of the modules automatically, or whatever subset of them you keep in the text file created by the code above:

use strict;
use warnings;
use LWP::Simple;
use File::Basename;

open(my $urls, '<', 'urllist.txt') or die "File open failure: $!";
while (my $downloadurl = <$urls>) {
    chomp $downloadurl;    # strip the newline so the URL and filename are clean
    my ($name, $path, $suffix) = fileparse($downloadurl);
    my $finurl = $downloadurl;
    print $finurl . "\n";
    my $savefilename = $name . $suffix;    # each file is saved under its own name
    print $savefilename . "\n";
    my $status = getstore($finurl, $savefilename);
    print $status . "\n";
}
close $urls;
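
If the same list is fetched more than once, one variation (a sketch, not the original code) is to swap getstore() for LWP::Simple's mirror(), which skips files that have not changed, and to check the HTTP status with is_success() from HTTP::Status:

use strict;
use warnings;
use LWP::Simple;
use HTTP::Status qw(is_success);
use File::Basename;

open(my $urls, '<', 'urllist.txt') or die "File open failure: $!";
while (my $downloadurl = <$urls>) {
    chomp $downloadurl;
    my $name   = fileparse($downloadurl);
    # mirror() re-downloads only when the remote file is newer; 304 means "unchanged".
    my $status = mirror($downloadurl, $name);
    warn "Failed to fetch $downloadurl (status $status)\n"
        unless is_success($status) or $status == 304;
}
close $urls;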

Both pieces of code work nicely on my WinXP box. Yes, I know that "tempfile.html" gets clobbered, but I was just glad to get this code working, and WinXP doesn't seem to care. In any case, one can now generate a local repository of modules. Suggestions for improvement in my code are welcome!
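
On the tempfile.html clobbering, a minimal sketch of one way around it is to let File::Temp (a core module) choose the scratch filename and clean it up automatically; the rest mirrors the first script:

use strict;
use warnings;
use LWP::Simple;
use File::Temp qw(tempfile);
use HTML::SimpleLinkExtor;

my ($url, $filetype) = @ARGV;

# Download into a throwaway file that is deleted when the script exits.
my ($fh, $tmpname) = tempfile(SUFFIX => '.html', UNLINK => 1);
getstore($url, $tmpname);

my $extor = HTML::SimpleLinkExtor->new();
$extor->parse_file($tmpname);
print "$_\n" for grep { /\Q$filetype\E$/ } $extor->a;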

  • Have you seen this [stonehenge.com] and this [cpan.org]?

    --
    J. David works really hard, has a passion for writing good software, and knows many of the world's best Perl programmers
    • I had looked at the CPAN::Mini module but didn't notice the link to the article. My objective with the code was to allow me to use it for extracting files from any web page, not just CPAN. For example, occasionally I'll do a Google search for PDF documents on a particular subject. Now I can save the search page, use the first script to get the PDF links, and use the second to quickly download them all.

      No doubt somewhere someone has produced similar code, but I learned a lot writing these scripts in any case.
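
      For that kind of one-off job, the two steps can also be folded into a single sketch; the filename search.html and the .pdf extension below are just placeholders for this example:

      use strict;
      use warnings;
      use LWP::Simple;
      use File::Basename;
      use HTML::SimpleLinkExtor;

      # Parse a locally saved results page and fetch every PDF it links to.
      my $extor = HTML::SimpleLinkExtor->new();
      $extor->parse_file('search.html');

      for my $link (grep { /\.pdf$/i } $extor->a) {
          my $name = fileparse($link);
          print "Fetching $link -> $name\n";
          getstore($link, $name);
      }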