When working with virtual machines, R or Python, I sometimes see the memory load get very high and then the system become completely unresponsive. This post will show you how to tune memory management in Linux so the system does not freeze.
Sysctl parameters
swappiness
This freeze problem can be caused by Linux moving a big chunk of memory to the swap. Because disk access is so slow, this can stall the system until all the data has been moved. It is possible to define when to start moving data to the swap by changing the vm.swappiness sysctl parameter:
sudo sysctl -w vm.swappiness=10
Where swappiness=100 tells the kernel to swap aggressively, and swappiness=0 tells it to avoid using the swap for as long as possible. Ubuntu recommends a value of 10.
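You can check the current value before changing it, either with sysctl or directly from /proc:
sysctl vm.swappiness
cat /proc/sys/vm/swappiness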
Minimum free memory
Another value that can be updated is vm.min_free_kbytes. It forces the Linux VM subsystem to keep a minimum number of kilobytes free, so the kernel still has room to work when it needs to swap data out but no free memory is left. According to this thread, a good value can be 5% of your RAM divided by the number of cores:
sysctl -w vm.min_free_kbytes=512000
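As a rough sketch of that rule of thumb, you can compute a value for your own machine from /proc/meminfo and the core count (TOTAL_KB is just a temporary variable name for this example):
# total RAM in kilobytes
TOTAL_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
# 5% of RAM divided by the number of cores
echo $(( TOTAL_KB / 20 / $(nproc) ))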
Overcommit memory
It is possible to configure the flag vm.overcommit_memory to allow a process to reserve more memory than is actually free in the system. As copied from the official documentation, three different values are available:
value 0: the kernel attempts to estimate the amount of free memory left when userspace requests more memory. This is the DEFAULT value.
value 1: the kernel pretends there is always enough memory until it actually runs out.
value 2: the kernel uses a “never overcommit” policy that attempts to prevent any overcommit of memory. Note that user_reserve_kbytes affects this policy.
This feature can be very useful because there are a lot of programs that malloc() huge amounts of memory “just in case” and don’t use much of it. For developers, it may be useful to read this article about how memory is managed.
You can fully disable memory overcommit by executing:
sysctl -w vm.overcommit_memory=2
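With vm.overcommit_memory=2, the total amount of memory that can be committed is roughly the swap size plus a percentage of RAM controlled by vm.overcommit_ratio (50 by default). You can check the resulting limit (CommitLimit) and how much is currently committed (Committed_AS) with:
grep -i commit /proc/meminfo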
Out-of-Memory Kernel panic
When an out-of-memory situation occurs, the system can either invoke a mechanism that kills some processes to free memory or enter panic mode. A complete guide about how to deal with these situations can be found here.
There is a flag called vm.panic_on_oom which enables or disables the panic-on-out-of-memory feature. According to the official documentation, it can take the following values:
Value 0: the kernel will invoke the OOM killer to kill some rogue process. Usually, the OOM killer can kill a rogue process and the system will survive. This is the DEFAULT value.
Value 1: the kernel panics when an out-of-memory situation happens. However, if a process restricts its allocations to certain nodes with mempolicy/cpusets and only those nodes run out of memory, one process may still be killed by the OOM killer and no panic occurs, because memory on other nodes may be free and the system as a whole may not be in a fatal state yet.
Value 2: the kernel always panics, even in the case described above. Even if the OOM happens inside a memory cgroup, the whole system panics.
With kernel.panic you can define the number of seconds the kernel waits before rebooting. For example:
sysctl vm.panic_on_oom=1
sysctl kernel.panic=2
Out-of-Memory Killer
If vm.panic_on_oom is set to 0 and an OOM happens, the OOM killer will be invoked. The OOM killer is in charge of killing a process when more memory is requested than can be reserved. Its behavior can be configured with the vm.oom_kill_allocating_task flag. According to the official documentation, it can take the following values:
Value 0: the OOM killer will scan through the entire tasklist and select a task based on heuristics to kill. This normally selects a rogue memory-hogging task that frees up a large amount of memory when killed. The default value is 0.
Value non-zero: the OOM killer simply kills the task that triggered the out-of-memory condition. This avoids the expensive tasklist scan.
For example, to make the OOM killer kill the process which triggered the out-of-memory condition:
sysctl vm.oom_kill_allocating_task=1
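Independently of this flag, you can make an individual process less likely, or impossible, to be chosen by the OOM killer by writing to /proc/<pid>/oom_score_adj, which accepts values from -1000 (never kill) to 1000 (kill first). For example, to protect the SSH daemon (assuming it is running):
echo -1000 | sudo tee /proc/$(pidof -s sshd)/oom_score_adj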
Configure your system
To make all these sysctl changes permanent, you must edit the file /etc/sysctl.conf and set the sysctl flags there with their corresponding values. For example:
vm.swappiness=10
vm.min_free_kbytes=512000
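To apply the values from /etc/sysctl.conf without rebooting, reload the file with:
sudo sysctl -p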
More information about tuning system performance here: Performance_Tuning_Guide
Ulimit
The Linux kernel provides configurable limits and maximums for system resources, such as the maximum memory used, the maximum number of open file handles, the maximum number of processes, etc.
To see the limits configured in your system, execute ulimit -a:
root@mypc:~# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63520
max locked memory       (kbytes, -l) 16384
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63520
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
If your system keeps freezing on high memory consumption, it can be a good idea to limit the maximum amount of virtual memory available to a process. Then, when a process tries to get more memory than the limit, it will crash, but your computer will stay responsive.
You can do this with the -v option, giving the limit in kilobytes:
ulimit -v 15000000
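Keep in mind that a limit set this way only applies to the current shell and the processes started from it. As a small illustration (analysis.R is just a placeholder script name), you can run a memory-hungry R job under the limit inside a subshell so the rest of your session is unaffected:
(ulimit -v 15000000; Rscript analysis.R)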
If you want to make these changes permanent, edit the file /etc/security/limits.conf and add the following line:
* hard as 15000000
ulimits can be set per user, group or process; more info can be found here.
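For example, the first column of /etc/security/limits.conf selects the domain: a username, a group prefixed with @, or * for everyone. A sketch with hypothetical user and group names:
# limit the virtual address space of user alice to ~15 GB
alice        hard    as       15000000
# limit members of the developers group to 500 processes each
@developers  hard    nproc    500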
Compressing RAM and SWAP
Zswap
Zswap is a Linux kernel feature providing a compressed write-back cache for swapped pages. Instead of moving memory pages to a swap device when they are to be swapped out, zswap compresses them and stores them in a memory pool dynamically allocated inside the system's RAM.
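Zswap is included in most modern kernels but is not necessarily enabled by default. Assuming your kernel was built with zswap support, you can enable it at boot by adding zswap.enabled=1 to the kernel command line (e.g. in /etc/default/grub, followed by update-grub), or toggle it at runtime:
echo 1 | sudo tee /sys/module/zswap/parameters/enabled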
Zram
The Zram kernel module (previously called compcache) provides a compressed block device in RAM. If you use it as a swap device, the RAM can hold much more information but uses more CPU. Still, it is much quicker than swapping to a hard drive. If a system often falls back to swap, this could improve responsiveness. Using zram is also a good way to reduce disk read/write cycles due to swap on SSDs.
How to configure zRam in Ubuntu:
In Ubuntu there are some scripts which make it easy to configure zram kernel support. To use it, first install the zram-config package as root:
apt-get install zram-config
Now execute the init script for zram as root:
init-zram-swapping
This will create as many zram swap devices as the CPU has cores. By default, zram will use half of the RAM for these devices.
You can check that this worked by looking into the /proc/swaps file:
root@mypc:~# cat /proc/swaps
Filename        Type        Size       Used      Priority
/dev/sda5       partition   15624188   2372132   -1
/dev/zram0      partition   2040828    0         5
/dev/zram1      partition   2040828    0         5
/dev/zram2      partition   2040828    0         5
/dev/zram3      partition   2040828    0         5
To stop using zram execute:
end-zram-swapping
This will remove all the devices.
You can check the free RAM and swap with the command free.
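For example, the -h flag prints the totals in human-readable units, and the Swap line includes the zram devices:
free -h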