73

It happens pretty often to me: I am compiling software in the background, and suddenly everything starts to slow down and eventually freezes up (if I do nothing), because I have run out of both RAM and swap space.

This question assumes that I have enough time and resources to open up Gnome Terminal, search through my history, and execute one sudo command.

What command can save me from having to do a hard reboot, or any reboot at all?

Yaron
  • 13,453
Anon
  • 12,339

12 Answers

84

In my experience Firefox and Chrome use more RAM than my first 7 computers combined. Probably more than that but I'm getting away from my point. The very first thing you should do is close your browser. A command?

killall -9 firefox google-chrome google-chrome-stable chromium-browser

I've tied the most popular browsers together into one command there, but obviously if you're running something else (or know you aren't using one of these), just modify the command. The killall -9 ... is the important bit. People do get iffy about SIGKILL (signal number 9), but browsers are extremely resilient. More than that, terminating slowly via SIGTERM means the browser does a load of cleanup rubbish, which requires a burst of additional RAM, and that's something you can't afford in this situation.

If you can't get that into an already-running terminal or an Alt+F2 dialogue, consider switching to a TTY. Ctrl + Alt + F2 will get you to TTY2, which should allow you to log in (though it might be slow) and should even let you use something like htop to debug the issue. I don't think I've ever run out of RAM to the point I couldn't get htop up.

The long-term solution involves either buying more RAM, renting it via a remote computer, or not doing what you're currently doing. I'll leave the intricate economic arguments up to you, but generally speaking, RAM is cheap to buy; if you only need a burst amount, a VPS billed per minute or per hour is a fine choice.

Oli
  • 299,380
67

On a system with the Magic SysRq key enabled, pressing Alt + SysRq + f (if SysRq is not marked on your keyboard, it is often on the Print Screen key) will manually invoke the kernel's out-of-memory killer (OOM killer), which tries to pick the worst-offending process for memory usage and kill it. You can do this if you have even less time than you've described and the system is just about to start (or has already started) thrashing; in that case you probably don't care exactly what gets killed, just that you end up with a usable system. Sometimes this can end up killing X, but these days it's a lot better at picking a bad process than it used to be.
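Note that the SysRq functions can be restricted by default; on Ubuntu the shipped bitmask (often 176) does not include the process-signalling group that Alt + SysRq + f belongs to, so the key combination may do nothing until you allow it. A quick sketch of checking and temporarily enabling it:

# check which SysRq functions are currently allowed (1 means all of them)
cat /proc/sys/kernel/sysrq

# allow everything until the next reboot
echo 1 | sudo tee /proc/sys/kernel/sysrq

# no working keyboard? the same OOM-kill function can be triggered from a terminal
echo f | sudo tee /proc/sysrq-trigger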

Muzer
  • 818
20

Contrary to other answers, I suggest that you disable swap while you are doing this. While swap keeps your system running in a predictable manner, and is often used to increase the throughput of applications accessing the disk (by evicting unused pages to allow room for the disk cache), in this case it sounds like your system is being slowed down to unusable levels because too much actively used memory is being forcibly evicted to swap.

I would recommend disabling swap altogether while doing this task, so that the out-of-memory killer will act as soon as the RAM fills up.

Alternative solutions:

  • Increase the read speed of swap by putting your swap partition in RAID1
    • Or RAID0 if you're feeling risky, but that will bring down a large number of running programs if any of your disks malfunction.
  • Decrease the number of concurrent build jobs ("more cores = more speed", we all say, forgetting that RAM use scales linearly with the job count); see the sketch after this list
  • This could go both ways, but try enabling zswap in the kernel. This compresses pages before they are sent to swap, which may provide just enough wiggle room to speed your machine up. On the other hand, it could just end up being a hindrance with the extra compression/decompression it does.
  • Turn down optimisations or use a different compiler. Optimising code can sometimes take up several gigabytes of memory. If you have LTO turned on, you're going to use a lot of RAM at the link stage too. If all else fails, you can try compiling your project with a lighter-weight compiler (e.g. tcc), at the expense of a slight runtime performance hit to the compiled product. (This is usually acceptable if you're doing this for development/debugging purposes.)
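For the build-jobs point in particular, it's simply a matter of passing a smaller -j value than one-per-core; the number here is just an example:

# two parallel compile jobs instead of -j"$(nproc)": slower, but each compiler
# instance's RAM use no longer multiplies by the core count
make -j2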
14

You can use the following command (repeatedly if needed) to kill the process using the most RAM on your system:

ps -eo pid --no-headers --sort=-%mem | head -1 | xargs kill -9

With:

  • ps -eo pid --no-headers --sort=-%mem: display the process IDs of all running processes, sorted by memory usage (highest first)
  • head -1: only keep the first line (process using the most memory)
  • xargs kill -9: kill the process

Edit after Dmitry's accurate comment:

This is a quick and dirty solution that should be executed when there are no sensitive tasks running (tasks that you don't want to kill -9).
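If you'd like to see what is about to be killed before pulling the trigger, the same sort order with a few extra columns shows the top offenders:

ps -eo pid,%mem,cmd --sort=-%mem | head -n 5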

Gohu
  • 386
11

Before running your resource-consuming commands, you could also use the setrlimit(2) system call, probably via the ulimit builtin of your bash shell (or the limit builtin in zsh), notably with -v for RLIMIT_AS. Then any attempt to consume too much virtual address space (e.g. with mmap(2), or sbrk(2) as used by malloc(3)) will fail, with errno(3) set to ENOMEM.

Then the memory-hungry processes started from that shell (after you typed ulimit) would fail or exit before freezing your system.

Read also Linux Ate My RAM and consider disabling memory overcommitment (by running echo 2 > /proc/sys/vm/overcommit_memory as root; the default value of 0 only applies a heuristic, see proc(5)...).
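As a concrete sketch (the 4 GiB figure is only an illustration; ulimit -v takes a value in KiB), in the shell you're about to build from:

# cap each process started from this shell at roughly 4 GiB of virtual address space
ulimit -v 4194304

# compiler processes that hit the cap now fail with ENOMEM instead of freezing the machine
make -j4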

10

this happens pretty often to me when I am compiling software in the background

In that case, something like killall -9 make (or whatever you are using to manage your compilation, if not make). This will stop the compilation from proceeding further: no new compiler processes get launched, and the ones already running will finish their current job and then exit (or can be killed the same way; see the sketch below). As a bonus, it doesn't need sudo, assuming you're compiling as the same user you're logged in as. And since it kills the actual cause of your problem instead of your web browser, X session or some process at random, it won't interfere with whatever else you were doing on the system at the time.
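As a sketch, assuming a GCC-based build (where cc1 and cc1plus are the processes actually holding the memory):

killall -9 make
# optionally also kill the compiler instances that make already launched
killall -9 cc1 cc1plus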

9

Create some more swap for yourself.

The following will add 8G of swap:

# create an 8 GiB file and restrict it to root (swapon warns about world-readable swap files)
dd if=/dev/zero of=/root/moreswap bs=1M count=8192
chmod 600 /root/moreswap
mkswap /root/moreswap
swapon /root/moreswap

It will still be slow (you are swapping) but you shouldn't actually run out. Modern versions of Linux can swap to files. About the only use for a swap partition these days is for hibernating your laptop.

Eliah Kagan
  • 119,640
7

One way to get a chunk of free RAM on a short notice is to use zram, which creates a compressed RAM disk and swaps there. With any half-decent CPU, this is much faster than regular swap, and the compression rates are pretty high with many modern RAM hogs like web browsers.

Assuming you have zram installed and configured, all you have to do is run

sudo service zramswap start
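If that service isn't available, a manual setup with util-linux's zramctl looks roughly like this; the device name and the 4G size are assumptions for illustration:

sudo modprobe zram
# allocate a compressed RAM device; zramctl prints the device it picked, e.g. /dev/zram0
sudo zramctl --find --size 4G
sudo mkswap /dev/zram0
# give it higher priority than any disk-backed swap so it gets used first
sudo swapon -p 100 /dev/zram0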
Dmitry Grigoryev
  • 1,960
  • 14
  • 23
3

Another thing one could do is free up the page cache via this command:

echo 3 | sudo tee /proc/sys/vm/drop_caches

From kernel.org documentation (emphasis added):

drop_caches

Writing to this will cause the kernel to drop clean caches, as well as reclaimable slab objects like dentries and inodes. Once dropped, their memory becomes free.

To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free reclaimable slab objects (includes dentries and inodes):
echo 2 > /proc/sys/vm/drop_caches
To free slab objects and pagecache:
echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will not free any dirty objects. To increase the number of objects freed by this operation, the user may run `sync' prior to writing to /proc/sys/vm/drop_caches. This will minimize the number of dirty objects on the system and create more candidates to be dropped.
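Putting that advice together into a single line:

sync && echo 3 | sudo tee /proc/sys/vm/drop_caches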

3

sudo swapoff -a will disable the swap, making the kernel automatically kill the process with the highest OOM score if the system runs out of memory. I use this if I know I'll be running something RAM-heavy that I'd rather kill if it goes out of control than let it go into swap and get stuck forever. Use sudo swapon -a to re-enable swap afterwards.
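One caveat: if the machine is already deep into swap, swapoff -a has to pull everything back into RAM first, so it's worth checking that it will actually fit before running it:

# compare "available" memory against how much is sitting in swap
free -h
swapon --show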

Later, you may want to take a look at your swap settings. It sounds like your swap is on the same disk as the root partition, which slows the whole system down when you hit swap, so avoid that if you can. Also, in my opinion, modern systems often get configured with too much swap: installers tend to allocate swap equal to RAM by default, so a 32 GiB machine ends up with 32 GiB of swap, as if you really wanted to push that much into swap space.

sudo
  • 131
  • 7
1

Recently I found a solution to my problem.

Since the Linux OOM killer isn't able to do its job properly, I started using a userspace OOM killer: earlyoom. It's written in C, fairly configurable, and it works like a charm for me.
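Assuming earlyoom is packaged for your release (it is in the repositories of recent Ubuntu versions), getting it running is roughly:

sudo apt install earlyoom
sudo systemctl enable --now earlyoom
# confirm it is running
systemctl status earlyoom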

Oleg Abrazhaev
  • 146
  • 1
  • 7
1

You said "compiling in the background". What are you doing in the foreground? If its you are developing with Eclipse or other resource heavy IDE, check if everything is properly terminated in the console.

Development environments often let you start multiple processes under development, and these can keep hanging around after you are no longer interested in them (paused in the debugger, or simply never properly finished). If the developer does not pay attention, tens of forgotten processes may accumulate during the day, using multiple gigabytes between them.

Check that everything that should have been terminated in the IDE really has been.
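A quick way to spot such leftovers is to list your own processes sorted by resident memory; anything near the top that you no longer recognise is a candidate:

ps -u "$USER" -o pid,rss,etime,cmd --sort=-rss | head -n 15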

h22
  • 196