use strict;
use warnings;
use LWP::Simple;
use POSIX;

# Fetch the Salt Lake City radar image (site MTX, product N0R) every 30 minutes,
# saving each frame with a GMT timestamp in the filename.
my $refresh = 1800;
do {
    my $base = "http://radar.weather.gov/ridge/RadarImg/N0R/MTX_N0R_0.gif";
    my $timestring = strftime( "%Y%m%d_%H%M%S", gmtime() );
    my $savefilename = "SLCRadar_" . $timestring . ".gif";
    my $status = getstore( $base, $savefilename );
    print $status . "\n";
    sleep $refresh;
} while ($refresh);
See my Standard Code Disclaimer.
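One refinement worth mentioning: the loop above re-downloads the radar frame every half hour whether or not the image on the server has actually changed, and it stamps the file with the download time rather than the image time. Here is a minimal, untested sketch of the same loop that asks the server for its Last-Modified time with head() (the same trick the GOES script below uses) and only saves frames that are genuinely new:

use strict;
use warnings;
use LWP::Simple;
use POSIX;

my $base       = "http://radar.weather.gov/ridge/RadarImg/N0R/MTX_N0R_0.gif";
my $refresh    = 1800;
my $last_epoch = 0;    # Last-Modified time of the most recent frame we saved

do {
    # head() returns (content_type, document_length, modified_time, expires, server)
    my $epoch = ( head($base) )[2];
    if ( defined $epoch && $epoch > $last_epoch ) {
        # Name the file after the server's own timestamp, in GMT
        my $timestring   = strftime( "%Y%m%d_%H%M%S", gmtime($epoch) );
        my $savefilename = "SLCRadar_" . $timestring . ".gif";
        print getstore( $base, $savefilename ) . "\n";
        $last_epoch = $epoch;
    }
    sleep $refresh;
} while ($refresh);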
NOAA updates its weather satellite images on the GOES website every 30 minutes, with images in infrared, visible light, and water vapor wavelengths for both the eastern and western halves of the US. The script below grabs those images and saves each one to a local file with a timestamp. It could prove useful for doing your own weather forecasting, or whatever. I have updated the script to use the "strftime" function from the POSIX module, so the filenames are now very precise (for example, "eastconusir20070530_074348.jpg"), and to timestamp the images with Greenwich Mean Time. Thanks to graff and jasonk over at Perlmonks for their feedback.
use strict;
use warnings;
use LWP::Simple;
use POSIX;

# GOES satellite image URLs, keyed by a descriptive name used in the saved filename.
my %images = (
    "eastconusir"  => "http://www.goes.noaa.gov/GIFS/ECIR.JPG",
    "eastconusvis" => "http://www.goes.noaa.gov/GIFS/ECVS.JPG",
    "eastconuswv"  => "http://www.goes.noaa.gov/GIFS/ECWV.JPG",
    "westconusir"  => "http://www.goes.noaa.gov/GIFS/WCIR.JPG",
    "westconusvis" => "http://www.goes.noaa.gov/GIFS/WCVS.JPG",
    "westconuswv"  => "http://www.goes.noaa.gov/GIFS/WCWV.JPG",
);

foreach my $key ( keys %images ) {
    print $key . "\n";

    # head() returns the image's Last-Modified time as its third element;
    # skip this image if the server doesn't answer.
    my $epoch = ( head( $images{$key} ) )[2] || next;
    my $timestring = strftime( "%Y%m%d_%H%M%S", gmtime($epoch) );
    print $timestring . "\n";
    print $images{$key} . "\n";

    # Save the image with the key and the GMT timestamp in the filename.
    my $status = getstore( $images{$key}, $key . $timestring . ".jpg" );
    print $status . "\n";
}
This post reminded me of a problem I have been trying to solve: extracting URLs that point to a specific filetype (say, a gz archive) from a web page. It turns out that CPAN has a page containing an alphabetical list of all modules, with a hyperlink to the tar.gz file of each one.
The following code (given the appropriate command-line input; i.e., gz instead of pdf) will create a text file with all of the URLs for the tar.gz files:
use strict;
use warnings;
use LWP::Simple;
use HTML::SimpleLinkExtor;

# usage: getfileoftype http://www.example.com pdf > urllist.txt
my $url         = shift;
my $filetype    = shift;
my $filetypelen = length($filetype);
my $offset      = -$filetypelen;

# Save the page locally, then pull out all of its <a href> links.
my $fileget = getstore( $url, "tempfile.html" );
my $extor   = HTML::SimpleLinkExtor->new();
$extor->parse_file("tempfile.html");
my @a_hrefs = $extor->a;

# Print only the links whose extension matches the requested filetype.
for my $element (@a_hrefs) {
    my $suffix = substr( $element, $offset, $filetypelen );
    if ( $suffix =~ m/$filetype/ ) {
        print $element;
        print "\n";
    }
}
Once you have that, you can use the following code to automatically download all of the modules, or whatever subset of them you choose to keep in the text file created by the code above:
use strict;
use warnings;
use LWP::Simple;
use File::Basename;

# Read the list of URLs produced by the link extractor and download each file,
# saving it under its original filename.
open( my $urlfile, '<', "urllist.txt" ) || die "File open failure!";
while ( my $downloadurl = <$urlfile> ) {
    chomp $downloadurl;
    next unless $downloadurl;

    my ( $name, $path, $suffix ) = fileparse($downloadurl);
    my $finurl = $downloadurl;
    print $finurl . "\n";

    my $savefilename = $name . $suffix;
    print $savefilename . "\n";

    my $status = getstore( $finurl, $savefilename );
    print $status . "\n";
}
close $urlfile;
Both pieces of code work nicely on my WinXP box. Yes, I know that "tempfile.html" gets clobbered, but I was just glad to get this code working, and WinXP doesn't seem to care. In any case, one can now generate a local repository of modules. Suggestions for improvement in my code are welcome!
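On the tempfile.html point, one possible improvement (just an untested sketch): since HTML::SimpleLinkExtor is an HTML::Parser subclass, its parse() method accepts the page as a string, so the page can be fetched into memory with LWP::Simple's get() and parsed directly, with no temporary file to clobber:

use strict;
use warnings;
use LWP::Simple;
use HTML::SimpleLinkExtor;

# usage: perl getfileoftype.pl http://www.example.com pdf > urllist.txt
my $url      = shift;
my $filetype = shift;

# Fetch the page into memory instead of writing tempfile.html.
my $content = get($url) or die "Couldn't fetch $url";

# parse() takes the HTML as a string rather than a filename.
my $extor = HTML::SimpleLinkExtor->new();
$extor->parse($content);

# Print links ending in the requested extension.
for my $element ( $extor->a ) {
    print $element, "\n" if $element =~ m/\Q$filetype\E$/;
}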
Found this JavaCGI Bridge Presentation, which, based on a cursory read, looks like an alternative to AJAX. The gist is this:
JavaCGIBridge Solution
CGI/Perl advantages
* CGI/Perl can provide database connectivity more easily
* Application related "business-rules" are centrally maintained and executed in CGI/Perl scripts
* Browsers that don't support Java can still use the application
* Leverage existing "legacy" CGI/Perl code
* Many Internet Service Providers only allow CGI/Perl access (no daemons)
Java advantages
* Java applet becomes a truly thin client
- only requires GUI code and GUI logic
- JavaCGIBridge class adds ~5k overhead
- No need to learn the entire Java class library.
- You can get by with AWT (Forms) and some utility classes
* Java applet can maintain state between all CGI script calls
* Java applet can cache retrieved data
- eliminates the need to constantly get redundant data from the server
Check the date of the presentation: 1997
It took me far longer than I thought it would to come up with this code that grabs a web page and stuffs all the page's hyperlinks into a text file.
Updated...
use strict;
use warnings;
use WWW::Mechanize;

# usage: perl linkextractor.pl http://www.example.com/ > output.txt
my $url = shift;

my $mech = WWW::Mechanize->new();
$mech->get($url);

# Report the HTTP status on STDERR so it doesn't end up in the redirected link list.
my $status = $mech->status();
print STDERR $status . " OK - URL request succeeded.\n";

# Print each link's URL, one per line.
my @links = $mech->links;
print STDOUT ( $_->url, $/ ) foreach @links;
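Tying this back to the earlier filetype-extraction script, WWW::Mechanize can do that kind of filtering itself: find_all_links() accepts a url_regex, so no substr/suffix juggling is needed. A quick sketch (the .pdf pattern and the script name are just examples):

use strict;
use warnings;
use WWW::Mechanize;

# usage: perl pdflinks.pl http://www.example.com/ > pdflist.txt
my $url = shift;

my $mech = WWW::Mechanize->new();
$mech->get($url);

# find_all_links() can filter by a regex against each link's URL,
# so only links ending in ".pdf" come back.
my @pdf_links = $mech->find_all_links( url_regex => qr/\.pdf$/i );
print $_->url, "\n" foreach @pdf_links;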