
All the Perl that's Practical to Extract and Report

  • That is very clever!

    A random idea: measuring the degree of redundancy in a Perl source file. I think it'd have to be something other than just simple text redundancy, since you don't want short symbol names (or hash keys) being favored over long ones.

    • It was interesting to run it against things like XML/Parser.pm, CGI.pm, and LWP.pm. LWP.pm has only 3 repeated lines (of 5 characters or more).

      Of course the statistic needs to be weighted by the number of lines and length of the lines.

      What is debatable is whether one should pull single-line idioms such as

      @out = sort {$a <=> $b} @list;

      which might occur in several places, into a single subroutine.
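      As a minimal core-Perl sketch of the statistic being discussed (a hypothetical script; it counts exact duplicate lines of 5 or more characters, ignoring surrounding whitespace):

      ```perl
      #!/usr/bin/perl
      use strict;
      use warnings;

      # Count exact duplicate lines, considering only lines of 5+
      # characters so short idioms and braces don't dominate.
      sub repeated_lines {
          my @lines = @_;
          my %count;
          for my $line (@lines) {
              $line =~ s/^\s+|\s+$//g;      # normalize whitespace
              next if length($line) < 5;    # skip short lines
              $count{$line}++;
          }
          # Keep only lines that occur more than once
          return { map { $_ => $count{$_} }
                   grep { $count{$_} > 1 } keys %count };
      }

      my $repeats = repeated_lines(
          'my $x = shift;',
          'print "hello";',
          'my $x = shift;',
      );
      print "'$_' occurs $repeats->{$_} times\n" for keys %$repeats;
      ```

      To weight the statistic as suggested, you could divide the repeat count by the total number of qualifying lines.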
  • I wonder if you might get additional traction by charting the data. I'm thinking along the lines of using GD, and dropping a dot at each "X repeats at Y" point. You'd end up with an upper diagonal matrix that should show diagonal stripes where repeats occur.
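    A sketch of that charting idea (the pair-finding is core Perl; the drawing assumes the GD module from CPAN and is guarded so it only runs when GD is available):

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Find (x, y) pairs where line x and line y (x < y) are identical --
    # the "X repeats at Y" points. Plotted, copied blocks of code show
    # up as diagonal stripes in the upper-triangular matrix.
    sub repeat_points {
        my @lines = @_;
        my %seen;    # line text => [ indices where it occurs ]
        push @{ $seen{ $lines[$_] } }, $_ for 0 .. $#lines;
        my @points;
        for my $idx ( values %seen ) {
            next unless @$idx > 1;
            for my $i ( 0 .. $#$idx - 1 ) {
                push @points, [ $idx->[$i], $idx->[$_] ]
                    for $i + 1 .. $#$idx;
            }
        }
        return @points;
    }

    my @code   = ( 'a();', 'b();', 'a();', 'b();' );
    my @points = repeat_points(@code);

    # Drawing step: one black pixel per repeat point.
    if ( eval { require GD; 1 } ) {
        my $size = scalar @code;
        my $img  = GD::Image->new( $size, $size );
        my $bg   = $img->colorAllocate( 255, 255, 255 );
        my $dot  = $img->colorAllocate( 0,   0,   0 );
        $img->setPixel( $_->[0], $_->[1], $dot ) for @points;
        open my $fh, '>', 'repeats.png' or die $!;
        binmode $fh;
        print {$fh} $img->png;
        close $fh;
    }
    ```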
  • A suggestion: to prevent wear and tear on those poor highlighters, you might want to emit HTML colorized output. (Use a templating system if you want to let others develop their own output formats; TT2 [cpan.org] and HTML::Template [cpan.org] serve opposite ends of the complexity spectrum, not to mention things like HTML::Mason [cpan.org].)

    I also wanted to mention mjd's Algorithm::Diff [cpan.org] (which Text::Diff [cpan.org] uses) in case that would help you find longest common subsequences.

    - Barrie

    • Definitely would do so if I either had a color printer or a laptop with a large monitor to take to the code review meeting. My 10" just isn't big enough.

      I suspected someone like mjd would have worked this before...but taking a look at the two modules you reference, I can't immediately see how you'd apply them to a single file.
      • Given a chunk of code that you've already identified as repeated, Algorithm::Diff could be used to look for "similar enough" chunks elsewhere in the file. It calculates longest common subsequences, so it can be used to identify chunks of repeated code, is all (another poster mentioned something which triggered me to think of A::D).

        You could also use it to diff the lines that were tweaked between the original code chunk and the copy-paste-tweaked code chunk. Lots of visual diffs do that sort of char-by-char diff.
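        A sketch of that first use, assuming Algorithm::Diff from CPAN is installed (the code chunks here are made-up examples):

        ```perl
        #!/usr/bin/perl
        use strict;
        use warnings;
        use Algorithm::Diff qw(LCS);

        # Compare an already-identified chunk against a candidate chunk,
        # line by line, and score similarity by the LCS length.
        my @original = (
            'my $fh = open_file($path);',
            'my @rows = read_rows($fh);',
            'process(@rows);',
        );
        my @candidate = (
            'my $fh = open_file($other_path);',
            'my @rows = read_rows($fh);',
            'process(@rows);',
        );

        my @common     = LCS( \@original, \@candidate );
        my $similarity = @common / @original;   # fraction of shared lines
        printf "%d of %d lines identical (%.0f%%)\n",
            scalar @common, scalar @original, 100 * $similarity;
        ```

        A chunk whose similarity exceeds some threshold (say, 80%) would be a "similar enough" copy-paste-tweak candidate worth flagging.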