
Journal of jplindstrom (594)

Monday July 19, 2004
03:29 AM

Profilers and profiling


So, at least two profilers worked fine, which is nice; historically I haven't had great success in that department.

Devel::Profile is a debugger-based, subroutine-level profiler which gives you the total execution time spent in each sub as well as the call count. The instrumentation overhead is about 10%. Very handy, but it would be useful to be able to disable reporting for certain modules/subs.

For example, in the application I profiled (an Apache-based OLTP application built on Class::DBI over e.g. MySQL, with CGI::Session for session state persistence), more than a third of the execution time was spent in DBI::db::commit, DBI::db::ping and DBI::st::execute, which mostly obscures the things that actually can be tuned outside the database, i.e. my application. That matters here, because with the database on a different machine, the mod_perl application itself is CPU bound and fair game for profiling and optimization.

I'm not sure whether Devel::Profile can be used to profile code running under mod_perl, since it's supposed to be invoked like so:

    perl -d:Profile

but for me that wasn't a problem. TDD tends to result in a nice design, and the call dispatch is completely decoupled from Apache, so that's where I drive the performance run.
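A driver along these lines (everything here is a made-up stand-in for the real GS::Server dispatch code, not the actual application) can then be run straight under perl -d:Profile:

```perl
#!/usr/bin/perl
# Hypothetical driver script: run it as  perl -d:Profile driver.pl
use strict;
use warnings;

# Stand-in for the real dispatcher: look up a handler sub by name
# and call it with the request hash.
sub dispatch {
    my ($request) = @_;
    my $handler = __PACKAGE__->can("dispatch_$request->{method}")
        or die "Invalid method: $request->{method}";
    return $handler->($request);
}

sub dispatch_baz {
    my ($request) = @_;
    return { status => "ok", echo => $request->{payload} };
}

# Drive a fixed, repeatable workload so profiling runs are comparable.
for my $i (1 .. 1000) {
    my $response = dispatch({ method => "baz", payload => $i });
    die "unexpected response" unless $response->{status} eq "ok";
}
print "done\n";    # prints "done"
```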

This is the output of Devel::Profile:

%Time    Sec.     #calls   sec/call  F  name
22.97    1.3708       23   0.059601     DBI::db::commit
 5.89    0.3514      317   0.001108     DBI::db::ping
 5.49    0.3277     7195   0.000046     <anon>:...e/lib/Class/DBI/
 4.75    0.2836      251   0.001130     DBI::st::execute
 3.62    0.2159     7263   0.000030     Class::DBI::Column::name_lc
 2.94    0.1752      207   0.000846     Class::DBI::_init
 2.29    0.1364     8844   0.000015     Class::Accessor::get
 1.35    0.0807       20   0.004035     GS::Server::Dispatcher::Foo::Bar::dispatch_baz
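Lacking a way to disable reporting per module, one could approximate it by post-processing the report. A sketch, assuming the column layout shown above (the filter and re-weighting are my own improvisation, not a Devel::Profile feature):

```perl
use strict;
use warnings;

# Drop DBI::* rows from a Devel::Profile-style report and recompute
# the %Time column over the subs that remain.
sub filter_report {
    my (@lines) = @_;
    my (@rows, $total);
    for my $line (@lines) {
        # Columns as in the report: %Time  Sec.  #calls  sec/call  name
        my ($sec, $calls, $per, $name) =
            $line =~ /^\s*[\d.]+\s+([\d.]+)\s+(\d+)\s+([\d.]+)\s+(\S+)/
            or next;
        next if $name =~ /^DBI::/;    # exclude time spent in the driver
        push @rows, [$sec, $calls, $per, $name];
        $total += $sec;
    }
    return map {
        sprintf "%5.2f  %8.4f  %7d  %9.6f  %s",
            100 * $_->[0] / $total, @$_
    } @rows;
}

# Demo on two rows: the DBI row disappears, the rest is re-weighted.
my @sample = (
    "22.97    1.3708       23   0.059601     DBI::db::commit",
    " 3.62    0.2159     7263   0.000030     Class::DBI::Column::name_lc",
);
print "$_\n" for filter_report(@sample);
```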

Interesting runner-up: stringification of the Class::DBI::Column objects. All three lines with #calls around 8000 are a result of this overload in Class::DBI::Column:

  use overload
      '""'     => sub { shift->name_lc },
      fallback => 1;

Lists of columns are stringified all over the place in Class::DBI, so this could be a good place to cheat:

  shift->name_lc    --unrolls-to-->    lc($_[0]->{name})

By violating DRY and breaking encapsulation in a horrible way, the run time decreased by 17% in this particular case. Core functions apparently aren't instrumented, because the 7195 executions of ":...e/lib/Class/DBI/" are nowhere to be found in the profiling report anymore.
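The cheat looks roughly like this, sketched on a stand-in class rather than Class::DBI's actual source:

```perl
package My::Column;    # stand-in for Class::DBI::Column
use strict;
use warnings;

sub new     { my ($class, $name) = @_; bless { name => $name }, $class }
sub name_lc { lc $_[0]->{name} }

# Original overload -- every stringification pays for a method call:
#   use overload '""' => sub { shift->name_lc }, fallback => 1;
# The cheat: inline the hash lookup, trading encapsulation for speed.
use overload
    '""'     => sub { lc $_[0]->{name} },
    fallback => 1;

package main;
my $col = My::Column->new("LastName");
print "$col\n";    # stringifies via the overloaded "" -- prints "lastname"
```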

But what about memoize? That would eliminate the lookup without screwing with the code structure. Alas, memoize is for caching the result of expensive calls, and this isn't such a case: the lookup isn't really expensive, only very, very frequent. So the cost of non-overloaded stringification (to avoid infinite recursion) plus memoize is more work than the original code.
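For completeness, here is roughly what memoizing would involve, again on a stand-in class. Memoize keys its cache on the stringified arguments by default, so the overloaded "" would be invoked to build the key, which calls name_lc and recurses; a NORMALIZER keyed on the reference address is one way to dodge that, and it shows why the cure costs more than the disease:

```perl
use strict;
use warnings;
use Memoize;
use Scalar::Util qw(refaddr);

package My::Column;    # stand-in for Class::DBI::Column
sub new     { my ($class, $name) = @_; bless { name => $name }, $class }
sub name_lc { lc $_[0]->{name} }
use overload '""' => sub { $_[0]->name_lc }, fallback => 1;

package main;

# Key the cache on the object's address instead of its (overloaded)
# string form. Every call now pays refaddr() plus a hash lookup --
# more work than the lc() it replaces.
memoize('My::Column::name_lc',
    NORMALIZER => sub { refaddr($_[0]) });

my $col = My::Column->new("LastName");
print $col->name_lc, "\n";    # computed once, cached thereafter
```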

I'm still not certain it's worth it. Devel::Profile is good for finding low-level inefficiencies, but it's a weak tool for identifying things at a higher level that should be done differently or not at all. For that, I think manually instrumenting the code, with full knowledge of the business logic, is the way to go.
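A minimal sketch of what I mean by manual instrumentation, using Time::HiRes (the phase names and the timed work are hypothetical):

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my %elapsed;

# Run a code block and accumulate its wall-clock time under a
# business-level phase name.
sub timed {
    my ($phase, $code) = @_;
    my $t0     = [gettimeofday];
    my @result = $code->();
    $elapsed{$phase} += tv_interval($t0);
    return @result;
}

# Hypothetical request phases; the sleeps stand in for real work.
timed(load_session => sub { select(undef, undef, undef, 0.01) });
timed(dispatch     => sub { select(undef, undef, undef, 0.02) });
timed(commit       => sub { select(undef, undef, undef, 0.01) });

printf "%-12s %.4fs\n", $_, $elapsed{$_}
    for sort { $elapsed{$b} <=> $elapsed{$a} } keys %elapsed;
```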

The other profiling module I managed to get working, Devel::SmallProf, is also debugger-based, but concentrates on individual lines of code instead of subroutines. Like Devel::Profile, it provides execution count and timing. The output is a little bit flaky, with "page breaks" inserted at random intervals. Here is an example:

       22 0.000142 0.000000    52:   my $method = $rhRequest->{"method"};
       22 0.000128 0.000000    53:   my $methodDispatch = "dispatch_$method";
       22 0.000139 0.010000    54:   $rhClassKey->{$domain} or die("Invalid
       22 0.000120 0.000000    55:   $rhClassKey->{$domain}->{$method} or
        0 0.000000 0.000000    56:
           ================ SmallProf version 1.15 ================
                   Profile of ../lib/GS/Server/           Page 157
    count wall tm  cpu time line
       22 0.000117 0.000000    57:   my $class =
       22 0.000234 0.000000    58:   $rhResponse = $class-
        0 0.000000 0.000000    59:
        0 0.000000 0.000000    60:
        0 0.000000 0.000000    61:  #Commit
       22 0.000162 0.000000    62:  GS->dbi_commit("Main");
        0 0.000000 0.000000    63: };
       22 0.000097 0.010000    64: if($@) {
        0 0.000000 0.000000    65:  my $err = $@;
        0 0.000000 0.000000    66:  eval { GS->dbi_rollback("Main"); };
        0 0.000000 0.000000    67:  $rhResponse = { statusMessage => "Internal
        0 0.000000 0.000000    68:  ##todo: log error
        0 0.000000 0.000000    69: }

The output could use an HTML-ization with color-coded lines to make the expensive ones stand out, and maybe a ToC to jump to the lines with the highest count and execution time. As it is now, it's a little too easy to gloss over all the code and miss the interesting parts. Visualization matters.
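A back-of-the-envelope sketch of the HTML idea, assuming the line format in the excerpt above (the threshold and styling are arbitrary choices of mine):

```perl
use strict;
use warnings;

# Turn SmallProf-style lines (count, wall time, cpu time, "line: source")
# into HTML, highlighting lines whose wall time exceeds a threshold.
sub smallprof_to_html {
    my ($threshold, @lines) = @_;
    my @html = ("<pre>");
    for my $line (@lines) {
        my ($count, $wall) = $line =~ /^\s*(\d+)\s+([\d.]+)/ or next;
        my $escaped = $line;
        $escaped =~ s/&/&amp;/g;
        $escaped =~ s/</&lt;/g;
        $escaped =~ s/>/&gt;/g;
        push @html, $wall >= $threshold
            ? qq{<span style="background:#fcc">$escaped</span>}
            : $escaped;
    }
    push @html, "</pre>";
    return join "\n", @html;
}

print smallprof_to_html(0.0002,
    '       22 0.000234 0.000000    58: $rhResponse = $class->dispatch;',
    '        0 0.000000 0.000000    59:',
), "\n";
```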

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • Modern versions of the DBI come with DBI::ProfileDumper and dbiprof, which are useful. They let you know how long your SQL took to execute.
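    One way to enable it (per the DBI::ProfileDumper docs; the exact syntax and flags may vary by DBI version): profile by statement into dbi.prof, then inspect it with dbiprof:

      % setenv DBI_PROFILE '!Statement/DBI::ProfileDumper'
      % perl myscript.pl
      % dbiprof --sort total --number 10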


  • To run mod_perl with Devel::Profile, try this:

      % setenv PERL5OPT -d:Profile
      % httpd -X -d `pwd` &
      ... make some requests to the server here ...
      % kill `cat logs/`
      % unsetenv PERL5OPT
      % dprofpp

    If Devel::Profile works for you, Devel::DProf should too.
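    For a quick Devel::DProf run outside mod_perl, the usual incantation (flags per the dprofpp docs) is:

      % perl -d:DProf myscript.pl
      % dprofpp -O 15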