
Journal of grantm (164)

Monday September 05, 2005
04:55 PM

mod_ssl + mod_perl segfault fun

[ #26606 ]

We recently launched a new mod_perl app that is accessed via SSL. Soon after launch, it became clear that we were getting lots of segfaults recorded in our error log:

[Tue Sep 4 03:42:47 2005] [notice] child pid 8195 exit signal Segmentation fault (11)

We cranked up the Apache LogLevel to 'debug' and found that almost every segfault message was preceded by something like this:

[Tue Sep 4 03:42:47 2005] [info] [client] mod_perl: Apache->print timed out

A bit of googling turned up this message, which suggests that a bug in mod_ssl's timeout handler could cause segfaults.

It turns out that all the timeouts were occurring while the app was sending a PDF generated on the fly. The PDF generation is very quick (thanks to PDF::Reuse) but the files are about 250KB. Assuming a clean connection with 56kbps throughput, that's going to take around 45 seconds to download. As it happens, the app is targeted at people who are travelling internationally, so many (if not most) users get much less bandwidth than 56kbps.
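As a back-of-envelope check (a sketch I've added, not from the original post; the 250KB size and 56kbps figures are the ones quoted above), the nominal line rate already puts the download over half a minute, and realistic modem throughput pushes it toward the 45-second mark:

```python
# Back-of-envelope transfer-time estimate for the 250 KB PDF.
# Real dial-up throughput is usually well below the nominal 56 kbps
# line rate, which is why ~45 seconds is a reasonable estimate.

def transfer_seconds(size_kb: float, rate_kbps: float) -> float:
    """Seconds to move size_kb kilobytes over a rate_kbps (kilobits/s) link."""
    bits = size_kb * 1024 * 8          # KB -> bits
    return bits / (rate_kbps * 1000)   # kbps -> bits/s

print(round(transfer_seconds(250, 56), 1))   # nominal 56 kbps line rate
print(round(transfer_seconds(250, 45), 1))   # plausible effective throughput
```

Either way, the transfer comfortably outlasts a 60-second server timeout on a slow link.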

Anyway, it turns out that for reasons nobody can explain, our Apache config included this line:

TimeOut 60

Bumping it up to 600 (10 minutes) has vastly decreased the incidence of segfaults, although we've still had about a dozen in the last 24 hours. I could increase it further, but having one Apache/mod_perl process tied up handling a single request for more than 10 minutes is not ideal. Luckily it's not a particularly high-traffic app.
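For reference, the fix amounts to a one-line change in httpd.conf. Apache 1.3's TimeOut directive covers (among other things) the time the server will wait between ACKs while sending a response, which is exactly what a slow SSL download trips over:

```apache
# httpd.conf -- global connection timeout, in seconds (Apache 1.3).
# Apache's default is 300; this installation had it lowered to 60,
# which slow dial-up clients fetching the 250KB PDF could not meet.
TimeOut 600
```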

  • If you're using mod_perl to serve up the PDF, part of the problem might be that you're not byteserving properly (or at all). See recipe 6.7 in the mod_perl Developer's Cookbook for a full explanation of why byteserving is important for PDF files, or at the very least the code from that recipe. The important part is near the end, where it talks about range requests and calls $r->set_content_length().

    Of course, if you're byteserving already, then I have no ideas :)

  • Each PDF has unique contents. Even if the same person downloads multiple times, each document will contain (amongst other things) a unique serial number. So we have gone to some lengths to make clients aware that byte serving is not an option.

    But the cause of the problem is clearly an issue with mod_ssl's timeout handling. If the downloads complete before the timeout there are no segfaults. If we don't use SSL there are no segfaults. Slow downloads + SSL = segfaults.

    • Well, byteserving doesn't necessarily have anything to do with unique content: PDF plugins will try to request parts of large documents as you scroll through them. So the reason I mentioned it is that if slow downloads are causing problems, proper byteserving can avoid some of that. If uniqueness is your issue, that ought to be handled by proper cache headers.
    • I've seen similar issues with mod_ssl, and have submitted a patch or two for some of them.

      What version of Apache, mod_ssl, and mod_perl are you using?


      • What version of Apache, mod_ssl, and mod_perl are you using?

        Well, the system is running the old Debian stable (woody), so the versions are very old. Apache is 1.3.26, mod_perl is 1.26 and I think mod_ssl is 1.48.