While playing around with ant algorithms to solve the traveling salesman problem, I realized that the procedural code I was using would probably be easier to read and maintain if I converted it to object-oriented code. My ants turned into snails. Eventually I tried hand-tuned OO, my Class::BuildMethods, and Moose. Of course, ant algorithms are AI, and AI is computationally intensive, so while Perl makes it easy to code AI solutions, their runtime performance leaves much to be desired. The original C code came from AI Application Programming, 2nd Edition, by M. Tim Jones, and it runs much faster than any of the Perl versions, of course.
Here are typical sample runs of the Perl code:

Procedural:
the code took:17 wallclock secs (15.73 usr + 0.07 sys = 15.80 CPU)

OO (blessed arrayref):
the code took:34 wallclock secs (32.65 usr + 0.13 sys = 32.78 CPU)

OO (blessed hashref):
the code took:35 wallclock secs (33.31 usr + 0.12 sys = 33.43 CPU)

Class::BuildMethods:
the code took:44 wallclock secs (41.98 usr + 0.16 sys = 42.14 CPU)

Moose:
the code took:37 wallclock secs (35.26 usr + 0.16 sys = 35.42 CPU)
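For reference, here's a minimal sketch of the two traditional OO styles being compared (my own illustration with made-up attribute names, not the actual benchmark code): a blessed hashref with named slots versus a blessed arrayref indexed through constants.

```perl
use strict;
use warnings;

package Ant::Hash;
# Blessed hashref: attributes live in named hash slots.
sub new {
    my ( $class, %args ) = @_;
    return bless { current_city => $args{current_city} }, $class;
}
sub current_city {
    my $self = shift;
    $self->{current_city} = shift if @_;
    return $self->{current_city};
}

package Ant::Array;
# Blessed arrayref: attributes live in numbered slots,
# named via compile-time constants.
use constant CURRENT_CITY => 0;
sub new {
    my ( $class, %args ) = @_;
    my $self = bless [], $class;
    $self->[CURRENT_CITY] = $args{current_city};
    return $self;
}
sub current_city {
    my $self = shift;
    $self->[CURRENT_CITY] = shift if @_;
    return $self->[CURRENT_CITY];
}

package main;
my $ant = Ant::Array->new( current_city => 3 );
print $ant->current_city, "\n";    # 3
```

Both expose identical accessors, so the callers don't care which representation is underneath.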
Moose was not appreciably slower than traditional OO methods, though my Moose-fu is lacking. My Class::BuildMethods was the worst, but then, that's the price I pay for my "quick prototype" code.
It was interesting to note that the arrayref and the hashref showed no appreciable performance difference, but then, even inside the classes, I was only reaching directly into the data structure in the getters/setters. Most of the overhead appears to be the method calls themselves.
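The method-call cost is easy to see with the core Benchmark module. Here's a small sketch (the class and attribute are my own toy example) comparing an accessor call against reaching directly into the underlying hash:

```perl
use strict;
use warnings;
use Benchmark 'cmpthese';

package Point;
sub new { bless { x => 42 }, shift }
sub x   { $_[0]->{x} }

package main;
my $point = Point->new;

# Run each sub for at least one CPU second and print a comparison table.
cmpthese( -1, {
    accessor => sub { my $v = $point->x },      # method call
    direct   => sub { my $v = $point->{x} },    # raw hash lookup
} );
```

The exact ratio varies by machine and perl version, but the direct lookup consistently wins, since it skips method dispatch and the extra sub call entirely.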
After doing some fairly aggressive hand-tuning of the blessed arrayrefs, while still respecting encapsulation outside of the class, I managed to get the following benchmark:
the code took:29 wallclock secs (27.94 usr + 0.13 sys = 28.07 CPU)
That's still well over 50% slower than the procedural version (28.07 vs. 15.80 CPU seconds), but it does show that performance benefits can be gained when needed.
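The hand-tuning amounted to things like the following (my reconstruction of the technique, not the actual tuned code): inside the class, hot paths index the arrayref directly through constants instead of calling accessors, while outside callers still go through the public methods.

```perl
use strict;
use warnings;

package Ant;
use constant { CURRENT => 0, VISITED => 1 };

sub new {
    my $class = shift;
    # slot 0: current city, slot 1: hashref of visited cities
    return bless [ 0, {} ], $class;
}

# Public accessor: outside code respects encapsulation.
sub current { $_[0]->[CURRENT] }

# Inside the class, hot paths poke the arrayref directly
# rather than paying for accessor method calls.
sub move_to {
    my ( $self, $city ) = @_;
    $self->[VISITED]{ $self->[CURRENT] } = 1;
    $self->[CURRENT] = $city;
    return $self;
}

package main;
my $ant = Ant->new;
$ant->move_to(5);
print $ant->current, "\n";    # 5
```

The trade-off is obvious: the class internals are now coupled to the array layout, so the constants have to stay in sync with the constructor, but callers never notice the difference.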