
I have seen several sites which recommend reducing swappiness to 10-20 for better performance.

Is it a myth or not? Is this a general rule? I have a laptop with 4GB RAM and a 128GB SSD; what swappiness value do you recommend?

Thanks.

Jorge Castro

7 Answers


Because a lot of people believe swapping = bad, and that if you don't reduce swappiness the system will swap when it really doesn't need to. Neither of those is really true. People associate swapping with times when their system is getting bogged down - however, it's mostly swapping because the system is getting bogged down, not the other way around. When the system swaps, Linux has already factored the performance cost into its decision to swap, and decided that not doing so would likely carry a greater penalty in system performance or stability.

The default setting has been arrived at after extensive testing on countless different hardware and software setups, being incredibly well tested by virtue of how many people use Linux and the variety of ways they use it. It wouldn't be adjustable if there weren't use cases for adjusting it in response to particular needs, but when doing so it's important to consider the risk of unintended consequences and of corner cases that weren't anticipated, a risk that grows the further you move the behavior from the defaults. Adjusting swappiness isn't a "simple fix" for all performance problems, but a compromise between many different facets.

If you want a simple fix, the simplest possible one is always to install more physical RAM, or purchase a system with more RAM. This solution is virtually guaranteed to have no unintended drawbacks.

How Linux uses RAM

Linux can use RAM for memory allocated by programs, or it can use it for mirroring the content of files on disk - whether that be program code, data files open for reading, or files recently read and kept as "cache". Absent any shortage of available RAM, Linux will keep recently read or used file data in memory in case it is needed again, as there is little cost in doing so and it can speed up the system if the same files are needed in the future. This leads to the typical situation where most RAM not allocated by programs is utilized for caching files.
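You can see this split on any Linux system from the kernel's own accounting (the fields below are standard in `/proc/meminfo`):

```shell
# How RAM is currently being used:
# "Cached" is the file-backed memory described above, and
# "MemAvailable" estimates how much could be reclaimed if needed.
grep -E '^(MemTotal|MemFree|MemAvailable|Cached):' /proc/meminfo
```

`free -h` presents the same numbers in a friendlier layout; its large "buff/cache" figure on a healthy system is expected, not a problem.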

If your memory use increases to the point that you are getting low on available RAM, the Linux kernel can either discard some file-backed memory pages, reducing cache, or (assuming you have swap enabled) move some memory allocated by programs out of physical RAM and onto the swap device. Exactly which it does is decided by the kernel's memory-management heuristics.

Eventually, if memory usage continues to rise and swap fills up (or swap is disabled), further allocation is not possible and your system reaches a point where it cannot satisfy a request for more memory; programs will then crash or be killed by the kernel's out-of-memory (OOM) killer to recover memory.

How Linux uses swap

To combat these problems, if you have swap enabled your system can re-allocate some seldom-used application memory to the swap device, freeing RAM. The freed RAM can prevent processes from dying due to running out of memory, and also leaves more space for file-backed caching so reads from disk files can proceed more smoothly.

To decide when swapping will be used, the system uses a complex algorithm that takes into account the relative cost of swapping unused program memory, in comparison to relinquishing file-backed memory (memory that mirrors the contents of files).

The "swappiness" tunable does not represent a threshold or a percentage of RAM, even though many sources have misrepresented it as such. It is a weighting that tells the system the cost of swapping relative to the cost of re-reading files from disk. In recent Linux kernels, "swappiness" is a value that can go up to 200. "0" is a special value that effectively disables swap unless it's a last resort. Otherwise, values 1 to 200 represent different relative balances between the cost of swapping vs re-reading files, where values over 100 should be used only when your swap device is significantly faster than the system drive.

Within the range of 1 to 100, 1 tells the system to heavily favor relinquishing file-backed memory, whereas 100 tells the system to treat both options as equal in cost, with values in between striking reasonable balances for systems where swap is on the same speed of device as the system drive. The algorithm deciding whether to swap still takes into account factors such as how long ago the memory in question was last accessed, among several other things. The default value is now 60, which is suitable for a range of drive technologies including SSDs. While lowering it to around 40 may still make sense if you have a traditional HDD with slow access times compared to its sequential read speed, and increasing it to around 90 may make sense for a modern SSD with fast random access, the default of 60 remains a reasonable value in both these situations.
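If you do decide to adjust it, the tunable is exposed through the standard `sysctl` mechanism (the file name `99-swappiness.conf` below is just an example):

```shell
# Read the current value (60 on most distributions)
cat /proc/sys/vm/swappiness

# Change it until the next reboot (needs root)
sudo sysctl vm.swappiness=60

# Persist it across reboots via a sysctl config file
echo 'vm.swappiness=60' | sudo tee /etc/sysctl.d/99-swappiness.conf
```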

Letting your system swap when the system deems it necessary is overall a very good thing, even if you have a lot of RAM. Letting your system swap if it needs to gives you peace of mind that if you ever run into a low memory situation even temporarily (while running a short process that uses a lot of memory), your system has a second chance at keeping everything running. If you go so far as to disable swapping completely, then you risk processes being killed due to not being able to allocate memory.

What is happening when the system is bogged down and swapping heavily?

Swapping is a slow and costly operation, so the system avoids it unless it calculates that the trade-off in cache performance will make up for it overall, or if it's necessary to avoid killing processes.

A lot of the time people will look at their system that is thrashing the disk heavily and using a lot of swap space and blame swapping for it. That's the wrong approach to take. If swapping ever reaches this extreme, it means that swapping is your system's attempt to deal with low memory problems, not the cause of the problem, and that without swapping your running process will just randomly die.
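Before blaming swap, it's worth checking whether the system is actually swapping at all. The kernel's counters tell you (these fields are standard in `/proc/vmstat` on kernels built with swap support):

```shell
# Cumulative count of pages swapped in and out since boot;
# if these stay constant over time, the system is not swapping at all
grep -E '^(pswpin|pswpout) ' /proc/vmstat
```

For a live view, `vmstat 1` shows per-second swap-in (`si`) and swap-out (`so`) columns; sustained non-zero values there indicate active swapping, and large ones indicate thrashing.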

What about desktop systems? Don't they require a different approach?

Users of a desktop system do indeed expect the system to "feel responsive" in response to user-initiated actions such as opening an application, which is the type of action that can sometimes trigger a swap due to the increase in memory required.

One way some people try to tweak this is to reduce the swappiness parameter, which makes the system more tolerant of applications using up memory and of running low on cache space.

However, this just moves the goalposts. The first application may now load without a swap operation, but it will leave less slack for the next application that loads. The same swapping may simply occur later, when you next open an application. In the meantime, overall system performance is lower because the system has purged file caches. Thus, any benefit from a reduced swappiness setting may be hard to measure: it reduces swapping delay at some times but causes slow performance at others. Reducing swappiness as low as 10 can leave the file cache much smaller, and can even create a different kind of disk thrashing, where files the system wants to read keep being purged and must be re-read.

Disabling swap completely should be avoided as you lose the added protection against out-of-memory conditions which can cause processes to crash or be killed.

The most effective remedy by far is to install more RAM if you can afford it.

Can swap be disabled on a system that has lots of RAM anyway?

If you have far more RAM than you're likely to need for applications, then you'll rarely need swap. Therefore, disabling swap probably won't make a difference in all usual circumstances. But if you have plenty of RAM, leaving swap enabled also won't have any penalty because the system doesn't swap when it doesn't need to.

The only situations in which it would make a difference would be in the unlikely situation the system finds itself running low on available memory, and it's in this type of situation where you would want swap most. So you can safely leave swap on its normal settings for added peace of mind without it ever having a negative effect when you have plenty of memory.
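To check what swap, if any, your system currently has configured:

```shell
# List active swap areas; output with only the header line means
# no swap is currently enabled
cat /proc/swaps
```

`swapon --show` and the "Swap" row of `free -h` present the same information with totals.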

But how can swap speed up my system? Doesn't swapping slow things down?

The act of transferring data from RAM to swap can be a slow operation, but it is only undertaken when the kernel predicts that the overall benefit of keeping a reasonable cache size will outweigh it. If your system is getting really slow as a result of disk thrashing, swap is not causing it but only trying to alleviate it.

Once data is in swap, when does it come out again?

Any given part of memory will come back out of swap as soon as it's used - read from or written to. However, typically the memory that is swapped is memory that has not been accessed in a long time and is not expected to be needed soon.

Transferring data out of swap is assumed to be about as time-consuming as putting it in there. Your kernel won't remove data from it if it doesn't need to. While data is in swap and not being used, it leaves more memory for other things that are being used, and more system cache.
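You can see how much of a given process currently lives in swap from the `VmSwap` field the kernel reports per process (a minimal sketch; the per-process totals are approximate):

```shell
# How much of this process is currently in swap ("0 kB" is common)
grep '^VmSwap' /proc/self/status

# Rough total across all processes readable to you
cat /proc/[0-9]*/status 2>/dev/null \
  | awk '/^VmSwap/ {sum += $2} END {print sum " kB swapped out in total"}'
```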

Other technologies you can use to alter your system swap behavior

zram is a method of having a compressed swap device in memory. This can be used as a way to avoid the relative slowness of reading and writing to disk or SSD storage, because writing to memory is much faster. For this increase in swap performance you are trading some CPU, because it needs to perform compression and decompression, and some physical memory space, because even though the zram device is compressed it still occupies some RAM, which then can't be recovered (see however "zram writeback" for a potential alleviation).
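As a minimal sketch, a zram swap device can be created by hand roughly like this, assuming a kernel with the zram module; the 4G size and the priority are example values, and distribution packages such as Ubuntu's `zram-tools` automate the same steps:

```shell
# Load the zram module and size the compressed RAM disk
sudo modprobe zram
echo 4G | sudo tee /sys/block/zram0/disksize

# Format it as swap and enable it at higher priority than disk swap,
# so the kernel prefers it
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0

# Verify the device and its compression statistics
zramctl
```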

zswap is an alternative technology that creates an in-memory compressed write-back cache of a swap device. It gives the same type of trade-off of CPU and memory for the benefit of improved swap performance as zram. Unlike zram, zswap always requires a regular swap device to be configured as well (and this shouldn't be a zram device). The idea is that zswap will then decompress and page out memory to the backing swap device when its cache becomes full or in some cases if memory pages are incompressible.
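zswap is built into most distribution kernels and can be inspected or toggled at runtime (the parameter paths below are the upstream kernel ones; availability depends on your kernel build):

```shell
# Check whether zswap is currently enabled (prints Y or N)
cat /sys/module/zswap/parameters/enabled

# Enable it for the running session
echo 1 | sudo tee /sys/module/zswap/parameters/enabled

# To enable it at every boot, add this to the kernel command line:
#   zswap.enabled=1
```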

These two technologies can help reduce the performance cost of swapping, suggesting that it may be appropriate to significantly increase Linux's swappiness setting. However, they aren't free of cost either; as discussed, there is a small cost in CPU and in the memory space the compressed store occupies. Should you wish to increase the value, a wise approach is to do so conservatively, test representative workloads in a low-memory situation, and observe the results. For some this will be too much work, in which case staying with the default remains a very safe course of action, even if some types of workload could be further improved with an adjustment. The default swappiness of 60 will still work, and will not prevent you from experiencing the benefits of the faster swapping.

thomasrutter

On a typical desktop, you have 4-5 active tasks that consume 50-60% of memory. If you set swappiness to 60, then about 1/4-1/3 of the ACTIVE task pages will be swapped out. That means that for every task change, for every new tab you open, for every JS execution, there will be swapping.

The solution is to set swappiness to 10. By practical observation, this causes the system to give up disk I/O cache (which plays little to no role on a desktop, as the read/write cache is hardly used at all unless you are constantly copying LARGE files) instead of pushing anything into swap. In practice, that means the system will refuse to swap pages, cutting I/O cache instead, until it hits 90% used memory. And that in turn means a smooth, swapless, fast desktop experience.

On a file server, however, I would set swappiness to 60 or even more, because a server does not have huge active foreground tasks that must be kept in memory as a whole, but rather a lot of smaller processes that are either working or sleeping, and not really changing their state immediately. Moreover, a server often serves (pardon the pun) the exact same data to clients, making disk I/O caches much more valuable. So on a server, it is much better to swap out the sleeping processes, freeing memory for disk cache requests.

On desktops, however, this same setting leads to swapping out blocks of memory belonging to REAL applications that almost constantly modify or access that data.

Oddly enough, browsers often reserve large chunks of memory that they constantly modify. When such chunks are swapped out, it takes a while to bring them back when they are requested - and at the same time, the browser goes on updating its caches, which causes huge latencies. In practice, you can end up waiting two minutes for a single web page in a new tab to load.

A desktop does not really care much about disk I/O, because a desktop rarely reads and writes large, cacheable, repeating portions of data. Cutting disk I/O cache in order to prevent swapping as much as possible is much more favorable for a desktop than having 30% of memory reserved for disk cache while 30% of RAM (full of blocks belonging to actively used applications) is swapped out.

Just launch htop; open a browser, GIMP and LibreOffice; load a few documents there; and then browse for several hours. It's really that easy to see for yourself.

terdon
Linux dude

If you run a Java server on your Linux system, you should really consider reducing swappiness well below the default value of 60, so 20 is indeed a good start. Swapping is a killer for a garbage-collected process, because each collection needs to touch large parts of the process memory. The OS has no means to detect such processes and get things right for them. It is best practice to avoid swapping as much as you possibly can on production application servers.

Andreas

I would suggest doing some experiments while keeping the system monitor open, to see exactly how much load your machine is under. I am also running with 4GB of memory and a 128GB SSD, so I changed the swappiness value to 10. This not only improved performance under load, but as a bonus should also increase the life of the SSD, since it will suffer fewer writes.

For a simple video tutorial on how to do this with a full explanation see the YouTube video below

http://youtu.be/i6WihsFKJ7Q

Tech-Compass

I want to add some perspective from a Big Data Performance engineer to give others more background on 2017 technology.

My personal experience is that, while I have typically disabled swapping to guarantee that my systems run at max speed, on my workstation, for one specific problem, I found that swappiness values of 1 and 10 led to (seemingly endless) freezes and long pauses, while a swappiness of 80 led to much better performance and shorter pauses for this particular application than the default (60). Note that I had 8GB RAM and 4x 256GB of swap backed by one HDD. I would normally state the precise statistics seen in my benchmarks and the full hardware specs, but I haven't run any yet, and it's a recent low-end desktop that is not important here.

Back at my former company, the reason we did not enable swap on Spark servers with [500GB to 4TB] x [10-100] nodes is that we treated poor performance as a sign to redesign the data pipeline and data structures more efficiently. We also did not want to benchmark the HDDs/SSDs. Also, swapping that much RAM would need 10-30 disks per node, with parallel writes, to minimize disk access time.

Today, as 20 years ago and 20 years from now, it remains the case that some problems are too large for RAM. With infinite time and money, we can buy/lease more hardware or redesign any process to reach a desirable level of performance. Swapping is just a hack that lets us ignore the real problem (we don't have enough RAM and we don't want to spend more money).

For those who think higher swappiness is bad advice, here is a little perspective. In the past, HDs had just a few KB of cache, if any. The interface was IDE/Parallel ATA. The CPU bus was also much slower, along with RAM and many other things. In short, systems were very slow (relative to today) in every way. A few years ago, drives used SATA3; today, SSDs use the NVMe protocol, which brings significant latency improvements, and HDs now have many MB of cache. The most interesting part is using a modern SSD (much better read/write endurance and performance) with NVMe or PCIe as your swap storage. It's the best compromise between cost and performance. Please do not try this with cheap or old SSDs.

Swap+SSDs! With fast storage like that, I would highly recommend experimenting with a high swappiness value. Whether it helps mainly depends on the memory access patterns (randomly accessing all memory vs rarely accessing most of it), overall memory usage, whether the disk bandwidth is already saturated, and the actual cost of thrashing.

ldmtwo

A personal anecdote: I didn't know about swappiness, and in hindsight it might have fixed my problem. My system is old and had 4GB of RAM.

I upgraded my Linux OS to the next long-term-support version. That version was "passively" using more RAM, which made my system use more swap. The system started bogging down because the swap was on an HDD.

Looking at the stats, RAM and swap usage combined were not greater than my total RAM. The problem was partially, as 'Linux dude' mentioned, that browsers often reserve large chunks of memory that they constantly modify. I was using Firefox (YouTube in particular is heavy), and because of that, large chunks were going into swap even though they were actually needed.

I ended up getting more RAM, which did solve my problem, but it might have been possible to postpone buying it had I tried setting swappiness to a lower value. I don't regret buying the RAM, it was a good upgrade, but not everyone can make such an upgrade.

h3dkandi

It could be that a lot of the perceived swapping behaviour on startup, or when opening programs, is Linux reading configuration files etc. from disk. So it may be best to check with the system monitor program before assuming that hard drive access is due to swapping.

Seth
user1740850