Linux: kernel httpd invoked oom-killer

February 11, 2016

Have you noticed this kind of behaviour in the system logs (/var/log/messages)?

kernel: httpd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
kernel: Call Trace:
kernel: [<ffffffff800cb67d>] out_of_memory+0x8e/0x2f3
kernel: [<ffffffff8002e493>] __wake_up+0x38/0x4f
kernel: [<ffffffff8000a7ea>] get_page_from_freelist+0x2/0x442
kernel: [<ffffffff8000f6a2>] __alloc_pages+0x27f/0x308
kernel: [<ffffffff8001308d>] __do_page_cache_readahead+0x96/0x17b
kernel: [<ffffffff800139d6>] filemap_nopage+0x14c/0x360
kernel: [<ffffffff80008972>] __handle_mm_fault+0x1fd/0x103b
kernel: [<ffffffff8000c795>] _atomic_dec_and_lock+0x39/0x57
kernel: [<ffffffff800671ae>] do_page_fault+0x499/0x842
kernel: [<ffffffff8005ddf9>] error_exit+0x0/0x84

Additionally, the system was hanging or very slow at that time: memory was running out and swap was being eaten up…

Well, for some reason (a bug or a sudden increase in traffic), all available memory had been allocated to one or more system processes (the trace above can appear for any process that requests memory, not just httpd). In this particular case, Apache was the culprit, but the same behaviour has been seen with Apache, MySQL, named, cron and many others.
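
When the trace above scrolls by, the kernel normally also logs which process it decided to kill. A quick way to confirm the culprit from the logs (a minimal sketch, assuming the same /var/log/messages location used above; the exact wording of the message varies between kernel versions):

# List oom-killer invocations and the processes that were actually killed
grep -iE "oom-killer|killed process" /var/log/messages

# The most recent events are also available in the kernel ring buffer
dmesg | grep -i oom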

I would suggest investigating the following:

  • Using “sar” to track down the load/resource usage behaviour around that time (see the sketch after this list)

  • Resource usage of the first process that triggered the oom-killer / out_of_memory event

  • Traffic pattern at that time

  • Scheduled cron jobs that could have triggered the event
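
For the first point, here is a minimal “sar” sketch (assuming the sysstat package is installed and its daily data files are kept under /var/log/sa/):

# Memory/swap utilization and load behaviour collected for the current day
sar -r
sar -q

# Same reports for a previous day (replace NN with the day of the month)
sar -r -f /var/log/sa/saNN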

If none of the above explains the peak usage, you may be facing a software or kernel bug. Updating the involved piece of software and/or the kernel could solve the issue.

Now, why did the “out of memory” situation happen, and how can we prevent it?

Keep in mind that by default, the kernel overcommits memory: it will grant allocation requests regardless of the current usage or the total amount of memory on the system.

You can improve that behaviour by applying the following kernel parameters (/etc/sysctl.conf):

vm.overcommit_memory = 2
vm.overcommit_ratio = 80

NOTE: parameters added to sysctl.conf are applied at boot (or when reloaded with “sysctl -p”). You may issue a command such as “sysctl -w vm.overcommit_memory=2” to apply a setting right away, but do not forget to add it to sysctl.conf as well, since values set with “sysctl -w” won’t survive a reboot.
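
To verify what is currently in effect, you can query the parameters and the resulting commit limit directly (CommitLimit and Committed_AS are exposed by the kernel in /proc/meminfo):

# Current values of the overcommit parameters
sysctl vm.overcommit_memory vm.overcommit_ratio

# Commit limit enforced by the kernel and memory currently committed
grep -i commit /proc/meminfo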

Here are the possible values of the “overcommit_memory” parameter and what they mean:

0: heuristic overcommit (this is the default)
1: always overcommit, never check
2: always check, never overcommit
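
With “overcommit_memory = 2”, the kernel refuses new allocations once the committed address space reaches CommitLimit, which is roughly swap plus “overcommit_ratio” percent of physical RAM. A quick back-of-the-envelope example, assuming a hypothetical box with 8 GB of RAM and 2 GB of swap:

# CommitLimit (in MB) = swap + RAM * overcommit_ratio / 100
echo $(( 2048 + 8192 * 80 / 100 ))   # prints 8601, i.e. about 8.4 GB allocatable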

Additionally, since this particular issue involved Apache, you may limit the memory per process in “httpd.conf”:

RLimitMEM <value_in_bytes>

For example, if you want to limit the process size to 128 megabytes, the directive and value will be:

RLimitMEM 134217728
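
If you prefer not to work out such values by hand, shell arithmetic gives the byte count (for reference, RLimitMEM also accepts an optional second value to set the hard limit):

# 128 megabytes expressed in bytes
echo $(( 128 * 1024 * 1024 ))   # prints 134217728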