After my initial experiments using the CPAN as a test run for parsing all the Perl in the world (all the visible Perl, at least), it became clear quite quickly that I needed to focus heavily on performance to get any further.
So I've made a couple of changes to the experimental Perl::Metrics2 (not currently on CPAN) to scale it up.
The first and most important change is architectural: splitting the capture and indexing of data from the analysis of that data. That also meant making the metrics engine natively aware that it might be working against a PPI::Cache.
The pm2minicpan script takes the location of a minicpan. It will run a CPAN::Mini::Visit iteration over the minicpan, identify all the "Perl Files" in each archive, invoke PPI to parse them and populate the parse cache, and save the list of md5 cache keys for the documents into the "cpan file index" against the specific file names.
This has the secondary benefit that if 1000 distributions use Module::Install or bundle the same author test, I only need to run the metrics on each distinct document instead of all instances of it. Deduplication like this will probably be essential later, when I encounter GreyPAN repositories that have bundled copies of their dependencies.
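The deduplication above falls out of keying documents by content rather than by file name. A minimal sketch of the idea (in Python for brevity; the real code is Perl, and PPI::Cache's exact key scheme beyond "MD5 of the content" is an assumption here):

```python
import hashlib

def cache_key(source: bytes) -> str:
    """Key a document by the MD5 of its raw content, roughly as PPI::Cache does."""
    return hashlib.md5(source).hexdigest()

# 1000 distributions bundling an identical Module::Install file...
bundled_copies = [b"package Module::Install;\n"] * 1000
unique_copy = b"package My::Module;\n"

# ...collapse to a single distinct document in the cache.
cache = {}
for source in bundled_copies + [unique_copy]:
    cache[cache_key(source)] = source

print(len(bundled_copies) + 1)  # 1001 total documents
print(len(cache))               # 2 distinct documents
```

The metrics run then only ever sees the two distinct documents, no matter how many distributions bundled them.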
A full indexer run currently yields a PPI document cache of about 6gig. However, because Archive::Tar is memory-bloaty on Win32 I've excluded any tarballs larger than 3meg to avoid hitting the 2gig process memory limit on my machine. I'm also not processing zip or bz2 files yet, so this will grow some more.
The pm2cache script performs the second half of the work. Using only the PPI::Cache and its existing database (and ignoring both the original files and the file index completely) it performs an inventory of the cache against the set of one or more plugins enabled.
It picks out all the document objects that one or more plugins haven't seen yet, generates the metrics for those plugins, and adds them to the database.
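That "not seen yet" check amounts to a set difference between the cache inventory and each plugin's already-processed documents. A rough sketch (Python; the plugin names and data shapes are invented for illustration):

```python
def execution_plan(cached_md5s, processed):
    """For each plugin, find the cached documents it has not yet metricized.

    cached_md5s: set of MD5 hex keys present in the PPI document cache.
    processed:   dict mapping plugin name -> set of MD5 keys already done.
    """
    plan = {}
    for plugin, seen in processed.items():
        todo = cached_md5s - seen
        if todo:
            plan[plugin] = todo
    return plan

cached = {"aaa", "bbb", "ccc"}
processed = {
    "Plugin::Core": {"aaa", "bbb"},         # hypothetical plugin names
    "Plugin::EOL":  {"aaa", "bbb", "ccc"},
}
plan = execution_plan(cached, processed)
# Only Plugin::Core still has work to do, on document "ccc".
```

Because the plan is just set arithmetic over MD5 keys, it stays cheap even when the cache holds hundreds of thousands of documents.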
Generating the execution plan is fairly cheap too, consuming about half a minute of CPU and around 50 meg of RAM. This should scale (just) to the full DarkPAN scan, although I may need to pay the extra CPU cost and disable some of the shortcut indexes on 32-bit machines if the document store grows too large.
The second type of optimisation I've done is more tactical.
All major operations will now drop all of the database indexes before starting, and regenerate them at the end. This costs 4 or 5 minutes of CPU to do, but reduces the total disk IO read cost during a complete run from 1.5 terabytes to 100-200gigabytes.
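The drop-then-recreate pattern looks roughly like this against SQLite (shown with Python's sqlite3 module; the table and index names are invented, not the real Perl::Metrics2 schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE file_metric (md5 TEXT, name TEXT, value TEXT)")
db.execute("CREATE INDEX idx_file_metric_md5 ON file_metric (md5)")

# Drop the index before the bulk run, so every insert stops paying
# the cost of updating index pages on disk.
db.execute("DROP INDEX idx_file_metric_md5")

rows = [("%032x" % i, "tokens", str(i)) for i in range(10000)]
db.executemany("INSERT INTO file_metric VALUES (?, ?, ?)", rows)
db.commit()

# Regenerate the index in a single pass at the end.
db.execute("CREATE INDEX idx_file_metric_md5 ON file_metric (md5)")

count = db.execute("SELECT COUNT(*) FROM file_metric").fetchone()[0]
```

Rebuilding an index once over the finished table is a sequential operation, which is where the order-of-magnitude IO saving comes from.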
I've also added per-1000 dist commits, and added support to ORLite for incremental commits during long reruns without having to reconnect to the database (ORLite is designed to normally stay disconnected from the SQLite file unless absolutely necessary, which removes any opportunity for SQLite to do disk caching).
And finally, when the metrics are inserted, they are batched into a single prepared statement on a per-document basis. I've also added a "hint" to the plugin that allows the caller to assert that the inserts won't clash with any existing records, and that the safety check to delete existing records for that plugin/document combination isn't required.
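The batched insert plus the "no clash" hint can be sketched like so (Python's sqlite3 standing in for the Perl/ORLite code; function and column names are illustrative only):

```python
import sqlite3

def store_metrics(db, plugin, md5, metrics, assume_new=False):
    """Insert all metrics for one document as a single batch.

    assume_new is the caller's hint that no rows for this plugin/document
    pair can already exist, so the defensive DELETE can be skipped.
    """
    if not assume_new:
        db.execute(
            "DELETE FROM file_metric WHERE plugin = ? AND md5 = ?",
            (plugin, md5),
        )
    db.executemany(
        "INSERT INTO file_metric (plugin, md5, name, value) VALUES (?, ?, ?, ?)",
        [(plugin, md5, name, value) for name, value in metrics.items()],
    )

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE file_metric (plugin TEXT, md5 TEXT, name TEXT, value)")
store_metrics(db, "Core", "aaa", {"tokens": 100, "sloc": 20}, assume_new=True)
rows = db.execute("SELECT COUNT(*) FROM file_metric").fetchone()[0]
```

During a fresh run over documents the plan already knows are unseen, every document gets the fast path; a rerun without the hint falls back to delete-then-insert and stays correct.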
There are a few more tactical changes left to go.
One of the plugins currently does a useless ->clone of the PPI::Document object so that it can discover the SLOC value. I'd like to change that to let the plugin classes advertise whether they are destructive.
That way, I can add a plugin scheduler that will call all the non-destructive plugins first, and then clone the document itself for all EXCEPT the final destructive plugin (which it would allow to consume the main document object).
It's a little fiddly, but since there will often be only one destructive plugin acting on a document, it means a substantial percentage of files never need to be cloned at all.
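The scheduling idea might look something like this (a Python sketch of the intended design, not existing Perl::Metrics2 code; the class names are made up):

```python
import copy

class Plugin:
    destructive = False
    def process(self, document):
        pass  # compute and store metrics for this document

class SlocPlugin(Plugin):
    # Advertises that it modifies the document (e.g. pruning it to count SLOC).
    destructive = True

def run_plugins(document, plugins):
    """Run non-destructive plugins on the shared document, hand clones to
    every destructive plugin except the last, and let the last destructive
    plugin consume the original. Returns the number of clones made."""
    clones = 0
    safe = [p for p in plugins if not p.destructive]
    unsafe = [p for p in plugins if p.destructive]
    for plugin in safe:
        plugin.process(document)
    for plugin in unsafe[:-1]:
        plugin.process(copy.deepcopy(document))
        clones += 1
    if unsafe:
        unsafe[-1].process(document)
    return clones

# With only one destructive plugin in the mix, no clone is ever made.
clones = run_plugins({"tokens": []}, [Plugin(), Plugin(), SlocPlugin()])
```

In the common case of exactly one destructive plugin, the clone count is zero, which is the whole point of the scheduler.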
After I make these last changes I'll probably have hit something close to the theoretical maximum performance for a serial processing algorithm. Progress beyond that requires parallelism and hashing, which means (since SQLite handles concurrency poorly) I either need to move away from SQLite, or batch the metrics in memory and only connect/insert/commit every 100 or so documents.
Of course, for now that would be overkill. Because even with my current implementation I can create a new metrics plugin, enable it, and process the entire CPAN in about 5-10 hours. This is plenty fast enough for what I'm doing at the moment, and it runs quite happily in the background of a dual-core machine. Going faster means limiting myself to my quad machine, just to buy another few hours.
And so to results!
Scanning only the core cases (excluding the pathological and edge cases) from the CPAN alone, I get the following initial numbers.
-- Files ------------------
Total Documents: 196,801
Distinct Documents: 173,436
Cached Documents: 150,121
Smallest Document: "\n\n"
-- Code -----------------------
Total Perl Bytes: 891,245,381
Total Perl Lines: 33,378,998
Total Perl SLOC: 18,524,339
Total PPI Tokens: 157,345,297
Significant Tokens: 94,187,367
These numbers conform to roughly what we'd expect to see for the CPAN. If the SLOC looks a little on the low side, it's most likely due to the removal of all those tarballs greater than 3 megabytes, which include Perl itself, bioperl, and some other giants that would add a few million more lines between them.
I'll need a way to do streaming tarball extraction and protect my memory from overflowing in Archive::Extract before I can include those big files.
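For what it's worth, streaming extraction means walking the archive member by member without ever holding the whole tarball in memory. One possible shape for it, using Python's tarfile in stream mode purely as an illustration (Archive::Tar's eventual equivalent would look different, and the per-member size limit is my assumption about how the fix would work):

```python
import io
import tarfile

# Build a small gzipped tarball in memory to stream from.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, size in [("lib/Small.pm", 100), ("lib/Huge.pm", 5_000_000)]:
        info = tarfile.TarInfo(name)
        info.size = size
        tar.addfile(info, io.BytesIO(b"#" * size))
buf.seek(0)

LIMIT = 3 * 1024 * 1024  # skip members bigger than 3 meg
kept = []
# Mode "r|gz" reads sequentially: only one member is buffered at a time.
with tarfile.open(fileobj=buf, mode="r|gz") as tar:
    for member in tar:
        if member.size > LIMIT:
            continue  # skip without ever loading the big file
        data = tar.extractfile(member).read()
        kept.append(member.name)
```

With per-member skipping like this, a 3 meg limit would only exclude individual oversized files rather than whole distributions.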
Much more interesting than being able to produce these numbers is the ability to look for specific patterns across the entire CPAN. As a test case, RJBS suggested that I look for all the files that use the now-deprecated magic variable $[.
After leaving it overnight to run the search, and joining the results against the cpan file index, I get the following list of 87 CPAN distributions currently in the index that use it (and probably need a fresh release to stop doing so).
As an aside, I'm just using the Firefox SQLite manager for my miscellaneous queries. It's far more convenient than dedicated programs (as long as the tables are well indexed).
  SELECT *
  FROM file_metric, cpan_file
  WHERE file_metric.md5 = cpan_file.md5
  AND name = 'array_first_element_index'
  AND value = 1
If it looks a little odd that the same distribution is listed multiple times, this is usually because an old release used a class that was removed in a later version, but the old release was never deleted from the CPAN. This results in the old dist being listed in the index, even though the chances of you actually installing it are practically nil.