Journal of ajt (2546)

Friday February 28, 2003
08:50 AM

File Handles on Linux/Apache

[ #10835 ]

I'm having a torrid time with File::Temp. I like the module and it mostly works well, but I'm not doing something correctly with it.

In an Apache/mod_perl PerlRun application I have the following happening many times:

use File::Temp qw(tempfile unlink0);

{  # start scope
  my ($SFH, $softer) = tempfile(SUFFIX => ".data",
                                DIR    => "/tmp/",
                                UNLINK => 1);
  # ... stuff ...
  unlink0($SFH, $softer);
}  # end scope

This appears to work on Windows (Perl 5.6.1, Apache 1.3.26, File::Temp 0.12) without a problem: I don't run out of file handles and I don't have files left over, but the Windows box isn't used as much. On the Linux box, same spec but much higher load levels, I've had miserable problems.

  • Files are left behind, and I eventually run out of file handles and strange things happen on the box. I restart Apache, delete the temp files, and all is well again.
  • I added the unlink0 call, which removes the files but does not close the filehandle. I get no files left over, but I still get strange filehandle problems.
  • I'm currently trying an explicit close on the filehandle after the unlink to see if that helps (sketched below).
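
The explicit-close variant looks roughly like this (a sketch only, reusing the same tempfile call as above):

use File::Temp qw(tempfile unlink0);

{  # start scope
  my ($SFH, $softer) = tempfile(SUFFIX => ".data",
                                DIR    => "/tmp/",
                                UNLINK => 1);
  # ... stuff ...
  unlink0($SFH, $softer);          # removes the file from the directory
  close $SFH or warn "close: $!";  # explicitly hands the descriptor back
}  # end scope

The idea is that under PerlRun the end of the block alone may not be releasing the descriptor, so the close is made explicit.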

As far as I can tell it's simply confused because it's running under mod_perl: the Perl interpreter never exits, so the normal end-of-program cleanup isn't happening the way I would expect.

Trying to find out how many file handles a Linux system can or cannot have hasn't been easy either. I've done quite a bit of Googling, but the same information turns up many times over, and so far I've not discovered any deep magic.

For example, on two nearly identical RedHat 7.x systems, cat /proc/sys/fs/file-nr gives:

  • 8192, 3110, 16384
  • 1891, 1349, 8192
  • (the columns are: allocated, free, maximum file handles)

It took me a long time to figure out that the middle number is the number of free file handles, not the number in use, and hence a small middle number is bad.
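
For what it's worth, pulling the three numbers apart from Perl looks something like this (a minimal sketch, assuming the allocated/free/max column order above):

#!/usr/bin/perl
use strict;

# Read the kernel's file handle counters: allocated, free, maximum.
open my $nr, '<', '/proc/sys/fs/file-nr' or die "file-nr: $!";
my $line = <$nr>;
close $nr;

my ($allocated, $free, $max) = split ' ', $line;
print "allocated=$allocated free=$free max=$max (in use: ",
      $allocated - $free, ")\n";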

Comments
  • Have a look at the code. I seem to recall that those numbers don't mean what you think they mean...

    From /usr/src/linux/Documentation/sysctl/fs.txt:

    The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file-handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. The three values in file-nr denote the number of alloc...

  • tempfile (Score:2, Informative)

    I've found tempfile() easiest to work with when you don't specify any parameters. That way, you just get back a filehandle and a name. If you only want a filehandle to play with, it gets even easier as the autodelete should happen automatically when the $fh goes out of scope.
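
    A minimal sketch of the two calling styles:

    use File::Temp qw(tempfile);

    # List context: you get the handle and the name back; deleting the
    # file is then your responsibility.
    my ($fh, $fname) = tempfile();

    # Scalar context: only the handle is returned, and on supported
    # systems the file is deleted automatically once the handle is closed.
    my $fh_only = tempfile();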

    What happens if you put a close $fh in there before you leave the routine?

    -Dom

    • Dom2,

      Useful comments. I agree it's a great module; the problem started when I was working with it on NT and Linux. When I'd finished, NT was working okay, but Linux was not...

      If you have an explicit close $fh, then what I found was that the file was NOT deleted, and I can't remember if the filehandle was. I've even tried an explicit unlink too.

      Like I said, I feel part of the problem is that it's running under mod_perl PerlRun, which is a bit of an odd place!

      Since I posted, I'm now at: 8192, 2781, 16

      --
      -- "It's not magic, it's work..."
      • As I said in the first reply, I don't necessarily think those kernel figures are anything to worry about. But I would be concerned about the undeleted files.

        The docs state that a file is not auto-deleted if a filehandle is requested as well as a filename; you have to unlink it yourself.

        One more thing: you might wish to check whether or not the unlink fails: unlink($fname) or warn "unlink($fname): $!\n"

        Oh, and to see whether filehandles are leaking in real, useful terms, the best thing to do is r...
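
        One concrete way to see whether descriptors are piling up on Linux is to count the entries under /proc/<pid>/fd; a rough sketch for illustration (the count includes the directory handle used to do the counting):

        #!/usr/bin/perl
        use strict;

        # Count this process's open file descriptors by listing /proc/$$/fd.
        opendir my $dh, "/proc/$$/fd" or die "opendir /proc/$$/fd: $!";
        my @fds = grep { !/^\.\.?$/ } readdir $dh;
        closedir $dh;

        print "PID $$ has ", scalar(@fds), " open descriptors\n";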

        • Dom,

          I think I know what is going on! I created the following little script, popped it into my PerlRun folder on my home Linux box and tested all the permutations.

          #!/usr/bin/perl
          $|++;

          use strict;
          use CGI;
          use File::Temp qw(tempfile unlink0);

          my $q = CGI->new;

          print $q->header(-type => "text/plain");

          print "PID: $$\n";
          print `cat /proc/sys/fs/file-nr`;

          for (1..50) {
              my ($TMP_FH, $tmp_file) = tempfile(SUFFIX => ".f", UNLINK => 0);
              print "Created: $tmp_file\t";
           

          --
          -- "It's not magic, it's work..."