NOTE: use Perl; is on undef hiatus. You can read content, but you can't post it. More info will be forthcoming forthcomingly.

All the Perl that's Practical to Extract and Report



Aristotle
  pagaltzis@gmx.de
http://plasmasturm.org/


Journal of Aristotle (5147)

Tuesday June 19, 2007
10:14 PM

How not to make empty promises

[ #33556 ]

I’ve been having an annoying problem with my system. Occasionally – most apparent when Firefox would try to load large animated GIFs – it would run out of memory and proceed to thrash the hard disk for a very long time, during which it would be very nearly unresponsive. This would easily last minutes, sometimes more than a quarter of an hour. Eventually, the Linux OOM killer would come out and shoot a random app in the face – usually my aggregator – to get things back under control.

This was puzzling. See, I don’t have any swap configured on this machine. In fact, I don’t even have a swap partition!

It turns out the system was thrashing on read-only objects – pages containing code. It would page out some code, page in other code from the executable image on disk, run a few instructions, then throw out the newly loaded page again in order to bring in yet more code. Of course, none of this was cacheable, as the cache itself takes memory, which was in short supply to begin with. Needless to say, interspersing a hardware disk access every couple hundred instructions slowed execution down to ZX81 levels (at best).

What is going on?

It turns out that Linux promises too much memory to applications. It promises more than is physically present! This is known as overcommit. It’s a good thing most of the time, because applications usually need that much memory only for short periods, and the presence of swap and the ability to load executable pages directly from the program binary mean that the request can really be satisfied. This keeps apps running under circumstances in which they would otherwise crash with an “out of memory” error, or lets them use RAM where they would otherwise fall back on their own swap-data-to-disk code. So in general, promising apps too much memory paradoxically keeps servers more stable and responsive.

But that presumes you really do have swap! See, the amount of make-believe RAM is based on the physical RAM. By default, the Linux kernel promises 50% more memory in total than there is physical RAM! That’s fine if you go by the olden times’ rule of thumb that one should have twice as much swap space as physical RAM – in that case, the kernel’s empty promise won’t make too huge a difference.

But RAM is dirt cheap now. I have 1.5GB in this machine. Why would I add any swap? This isn’t a server, where stability is imperative. It’s a desktop system, where responsiveness is critical. I don’t want it going to disk, ever.

Now you can see what happens on a system with 1.5GB of RAM and no swap: the Linux kernel operates on the assumption that there are 750MB of make-believe memory, for a total of 2.25GB. Imagine it promises this much memory to Firefox. Firefox asks for that much and starts using it, until it gets close to 1.5GB. The kernel notices it’s in a bind – but there is nothing to swap out except code pages! So the system rapidly slows to a crawl, while Firefox keeps trying to use more of the promised memory. Because of all the thrashing, claiming each further chunk of memory takes longer and longer, so it is some 5–10 minutes before Firefox finally hits the limit at which the kernel can no longer make up for its promise. Once that point is finally, finally reached, the OOM killer wakes up and starts indiscriminately canning processes.
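The figures above are easy to check with a little shell arithmetic. This sketch assumes the make-believe portion is simply the overcommit ratio applied to physical RAM, as the post describes; the post’s 750MB is the same number expressed in decimal units.

```shell
# Reproduce the post's figures: 1.5 GiB of RAM, default ratio of 50.
ram_kb=$((1536 * 1024))                     # physical RAM in kB
ratio=50                                    # default vm.overcommit_ratio
make_believe_kb=$(( ram_kb * ratio / 100 )) # the kernel's empty promise
total_kb=$(( ram_kb + make_believe_kb ))    # what apps can be promised
echo "$make_believe_kb kB extra, $total_kb kB total"
# → 786432 kB extra (768 MiB), 2359296 kB total (2.25 GiB)
```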

Ugh.

What to do?

sudo sysctl -w vm.overcommit_ratio=5

Fortunately, the kernel’s empty promises are configurable in /proc/sys/vm/overcommit_ratio. The default is 50, which means 50%, as explained; I set it to a saner (for a no-swap system) 5%. To make this permanent, I added the setting to /etc/sysctl.conf.
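For reference, a sketch of what the permanent configuration might look like. One caveat worth knowing: on stock kernels, vm.overcommit_ratio is only consulted in strict accounting mode (vm.overcommit_memory=2), so a setup that relies on the ratio being enforced needs both knobs set.

```shell
# /etc/sysctl.conf – applied at boot; load immediately with `sysctl -p`.
# Strict commit accounting: the ratio below is only enforced in this mode.
vm.overcommit_memory = 2
# Promise at most swap + 5% of physical RAM beyond what accounting allows.
vm.overcommit_ratio = 5
```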

A bit of overcommit is still useful, of course; even on a no-swap system, paging out a few read-only objects is not a problem.

  • Thanks for the report. I hope developers find a way to make this kind of change more automatic, because I can't see how everyday users would ever figure this out on their own.

    I wonder if Windows or Mac has a saner, more automatic solution for this.
    • Contemporary Linux distributions will set up a bunch of swap space for you by default if you let the installer create a partition layout automatically, so most people won’t encounter this issue.

      I am more particular than that, hence my trouble.

  • For things like Firefox, which have a tendency (from your account) to use too much memory and molest the hard disk, why not run them with (something like) limits(1) [freebsd.org]?
    --
    - parv
    • They don’t have a tendency to eat too much memory! Firefox just tries to use up what the kernel promised.

      Ever since I changed the overcommit ratio, unsurprisingly, there hasn’t been a single problem.

      Of course, I could use limits(1) or similar to individually reverse the kernel’s promises on a per-process basis, but why would I do such a backwards thing? I’d rather fix the problem once, at the source: the kernel’s unrealistic promise.
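      For completeness, the per-process route the parent comment suggests can be sketched with the POSIX shell built-in ulimit (limits(1) is the FreeBSD equivalent); the Firefox invocation at the end is just a placeholder for whatever app you want to cap.

      ```shell
      # Per-process cap in a subshell: limit the address space to 1 GiB
      # (ulimit -v takes kibibytes), so only commands run inside the
      # parentheses are affected, not the rest of the session.
      ( ulimit -v $((1024 * 1024)); ulimit -v )   # prints 1048576
      # In practice one would launch the memory-hungry app there instead,
      # e.g.  ( ulimit -v 1048576; exec firefox )
      ```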

      • Oh, you see, the “per process basis” is the “fine-grained” option. (I do get your point about setting the limit in global scope.)
        --
        - parv