I’ve been having an annoying problem with my system. Occasionally – most apparent when Firefox would try to load large animated GIFs – it would run out of memory and proceed to thrash the hard disk for a very long time, during which it would be very nearly unresponsive. This would easily last minutes, sometimes more than a quarter of an hour. Eventually, the Linux OOM killer would come out and shoot a random app in the face – usually my aggregator – in order to get things back under control.
This was puzzling. See, I don’t have any swap configured on this machine. In fact, I don’t even have a swap partition!
It turns out that what happened is that the system was thrashing on read-only pages – pages containing code. It would drop some code pages, page in others from the executable image on disk, run a few instructions, then throw out the newly loaded pages in order to pull in yet more code. Of course, none of this was cacheable, as the cache itself takes memory, which was in short supply to begin with. Needless to say, interspersing hardware disk access into every couple hundred instructions slowed execution down to ZX-81 levels (at best).
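You can actually watch this happening: every time a discarded code page has to be re-read from the executable on disk, it counts as a major page fault, and the kernel keeps a cumulative counter of those. A rough way to observe it:

```shell
# pgmajfault counts pages that had to be read back in from disk.
# Sample it twice, a second apart; a large jump between the two
# readings means the system is thrashing.
grep '^pgmajfault' /proc/vmstat
sleep 1
grep '^pgmajfault' /proc/vmstat
```

On an idle system the two numbers will be identical or nearly so; during one of the episodes described above, the counter climbs by thousands per second.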
What is going on?
It turns out that Linux promises too much memory to applications. It promises more than is physically present! This is known as overcommit. It’s a good thing most of the time, because applications usually only need that much memory for short periods, and the presence of swap and the ability to load executable pages directly from the program binary mean that the requests can really be satisfied. This keeps apps running under circumstances in which they would otherwise crash with an “out of memory” error; or lets them use RAM under circumstances in which they would otherwise fall back to their own swapping-data-to-disk code. So in general, promising apps too much memory paradoxically keeps servers more stable and responsive.
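The kernel exposes its overcommit policy under /proc/sys/vm/, so you can inspect what your system is doing right now:

```shell
# 0 = heuristic overcommit (the default), 1 = always grant allocations,
# 2 = strict accounting, the mode in which overcommit_ratio is enforced
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio
```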
But that presumes you really do have swap! See, the amount of make-believe RAM is based on the physical RAM. By default, the Linux kernel promises 50% more memory in total than there is physical RAM! That’s fine if you go by the olden times’ rule of thumb that one should have twice as much swap space as physical RAM – in that case, the kernel’s empty promise won’t make too huge a difference.
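You can check the figures the kernel is actually working with in /proc/meminfo – CommitLimit is the total it uses for its commit accounting, Committed_AS is how much it has promised out so far:

```shell
# Physical RAM, the kernel's commit-accounting limit, and the amount
# of memory currently promised to all processes
grep -E '^(MemTotal|CommitLimit|Committed_AS)' /proc/meminfo
```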
But RAM is dirt cheap now. I have 1.5GB in this machine. Why would I add any swap? This isn’t a server, where stability is imperative. It’s a desktop system, where responsiveness is critical. I don’t want it going to disk, ever.
Now you can see what happens on a system with 1.5GB of RAM and no swap: the Linux kernel operates on the assumption that there are 750MB of make-believe memory, for a total of 2.25GB. Imagine it promises this much memory to Firefox. Firefox asks for that much and starts to use it, until it gets close to 1.5GB. The kernel notices it’s in a bind – but there is nothing to swap out except code pages! So the system rapidly slows to a crawl, while Firefox keeps trying to use more of the promised memory. Thanks to all the thrashing, using up what remains takes longer and longer, so it’s an agonising while before the kernel finally runs out of ways to make good on its promise. Once that point is reached, some 5–10 minutes later, the OOM killer wakes up and starts indiscriminately canning processes.
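When the OOM killer does strike, it at least leaves a record in the kernel log, so after the fact you can find out which process got shot:

```shell
# Look for OOM killer activity in the kernel ring buffer.
# (May require root; prints nothing if no OOM event has occurred.)
dmesg | grep -iE 'out of memory|oom-killer|killed process' || true
```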
What to do?
Fortunately, the kernel’s empty promises are configurable in
/proc/sys/vm/overcommit_ratio. The default is 50, which means 50%, as explained; I set it to a saner (for a no-swap system) 5%:
sudo sysctl -w vm.overcommit_ratio=5
To make this permanent, I added the setting to /etc/sysctl.conf.
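In case it’s useful, here is one way to do that, assuming the classic /etc/sysctl.conf setup (some distributions prefer a drop-in file under /etc/sysctl.d/ instead):

```shell
# Append the setting so it survives reboots (assumes /etc/sysctl.conf is read)
echo 'vm.overcommit_ratio = 5' | sudo tee -a /etc/sysctl.conf
# Re-read /etc/sysctl.conf without rebooting
sudo sysctl -p
# Confirm the value took effect
cat /proc/sys/vm/overcommit_ratio
```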
A bit of overcommit is still useful, of course; even on a no-swap system, paging out a few read-only objects is not a problem.