  • It's already a common practice to compile your templates before forking, but having a nice modularized version of that is handy. I'm not seeing the point of the separate cache, though. Does it actually have any effect other than skipping stat calls, which you could do with STAT_TTL? I suppose it would allow you to "pin" your favorite templates in memory while setting a small cache size for everything else, but the LRU cache should do the right thing for most people. (A pre-fork warm-up sketch appears after this thread.)
    • Removing the need for STAT_TTL is a minor convenience.

      But the more interesting improvements show up in two main scenarios.

      Firstly, when you don't want to load all your templates, but you still don't want to allow for infinite cache growth.

      Anything loaded before the fork should NEVER be destroyed: under copy-on-write, merely recycling that memory forces each child process to take its own private copy, so freeing it becomes the USE (and bloat) of that memory. Better to isolate the "free" templates from the rest of the caching logic.

      And secondly, it's a much simpler cache (just a hash) compared to the built-in one, which is a linked list with stat times and a hash index over the top (see the provider sketch after this thread).

      That cache routinely re-orders the linked list each time it retrieves a template, so while neither mechanism bloats memory because of the Template::Document objects themselves, the default cache continuously cycles its linked-list structure, which results in memory bloat for those bookkeeping structures.

      So the time Template::Provider->fetch takes to run, and the memory use it causes, are both reduced dramatically. We go from multiple layers of checks, multiple method calls, a call to get the system time, and linked-list reordering, to a single hash lookup.

      It may not save gigabytes of RAM or large percentages of CPU, but it does make fetching a Template::Document about as efficient as it can possibly be. It recovers all of the potential resource savings that caching has to offer.
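
For the first comment's point about compiling templates before forking, a minimal warm-up sketch with stock Template Toolkit looks roughly like this. The include path and template names are hypothetical; the idea is only to push each template through the provider in the parent process, so every forked child shares the compiled Template::Document objects via copy-on-write.

    use strict;
    use warnings;
    use Template;

    my $tt = Template->new({
        INCLUDE_PATH => '/var/www/templates',  # hypothetical path
        COMPILE_EXT  => '.ttc',                # also keep compiled copies on disk
        STAT_TTL     => 3600,                  # re-stat rarely once warmed
    }) || die Template->error();

    # Process each template once in the parent so it is compiled and cached
    # before any children are forked.
    for my $name (qw(header.tt body.tt footer.tt)) {  # hypothetical names
        my $out = '';
        $tt->process($name, {}, \$out)
            || warn "preload of $name failed: ", $tt->error();
    }

    # ... fork the worker children here; each child inherits the warm cache ...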
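
For the reply's separate, pinned cache, a rough sketch of the shape such a provider could take is below. The package name and the preload() method are hypothetical, not the module under discussion; the point is just that a plain hash consulted ahead of Template::Provider's normal fetch() turns a hit on a pinned template into a single hash lookup, with no stat() call and no LRU reordering.

    package My::Preload::Provider;  # hypothetical name, illustration only
    use strict;
    use warnings;
    use parent 'Template::Provider';

    # Plain hash of pinned, pre-compiled Template::Document objects, kept
    # entirely apart from the stat-checked, LRU-ordered cache in the parent class.
    my %PINNED;

    # Compile the named templates once (in the parent, before forking) and
    # pin them for the life of the process.
    sub preload {
        my ($self, @names) = @_;
        for my $name (@names) {
            my ($doc, $error) = $self->SUPER::fetch($name);
            $PINNED{$name} = $doc unless $error;
        }
        return $self;
    }

    # A pinned template costs one hash lookup: no stat(), no system time call,
    # no linked-list churn in the children.
    sub fetch {
        my ($self, $name) = @_;
        if (defined $name && !ref $name && exists $PINNED{$name}) {
            return ($PINNED{$name}, undef);
        }
        return $self->SUPER::fetch($name);
    }

    1;

A Template object would then be handed the provider explicitly, along the lines of Template->new({ LOAD_TEMPLATES => [ My::Preload::Provider->new({ INCLUDE_PATH => '/var/www/templates' }) ] }), with preload() called once before forking.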