Because a lot of people believe that swapping is inherently bad, and that if you don't reduce swappiness, the system will swap when it really doesn't need to. Neither of those is really true. People associate swapping with times when their system is bogged down - but it's mostly swapping because the system is bogged down, not the other way around. When the system swaps, Linux has already factored the performance cost into its decision, and has concluded that not swapping would likely carry a greater penalty in performance or stability.
The default setting was arrived at after extensive testing across countless hardware and software setups, and it is exercised daily by the sheer number of people using Linux in a huge variety of ways. The value wouldn't be adjustable if there were no legitimate reasons to tune it for particular needs, but when you do, it's important to consider the risk of unintended consequences and corner cases - a risk that grows the further you move from the defaults. Adjusting swappiness isn't a "simple fix" for all performance problems; it's a compromise between many different factors.
If you want a simple fix, the simplest possible one is to install more physical RAM, or buy a system with more of it. That solution is virtually guaranteed to have no unintended drawbacks.
How Linux uses RAM
Linux can use RAM for memory allocated by programs, or it can use it to mirror the contents of files on disk - whether that's program code, data files currently open for reading, or files recently read and kept as "cache". When there is no shortage of available RAM, Linux keeps recently read or used file data in memory in case it's needed again: there's no cost to doing so, and it can speed things up considerably if the same files are read again later. This leads to the typical situation where most of the RAM not allocated by programs is used for caching files.
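You can see this split on your own machine: the kernel publishes these figures in /proc/meminfo. Below is a small Python sketch (standard library only, and it assumes a Linux /proc filesystem) that prints the relevant totals:

    #!/usr/bin/env python3
    """Print a few memory figures from /proc/meminfo (Linux only)."""

    def read_meminfo():
        # Each line looks like "MemTotal:       16384256 kB"
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                info[key] = int(value.split()[0])  # values are in kB
        return info

    if __name__ == "__main__":
        mem = read_meminfo()
        for key in ("MemTotal", "MemFree", "MemAvailable",
                    "Cached", "SwapTotal", "SwapFree"):
            print(f"{key:>12}: {mem.get(key, 0) / 1024:10.1f} MiB")

On most systems "MemFree" will look small while "MemAvailable" is much larger, precisely because the "Cached" memory can be handed back to applications whenever they need it.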
If memory use increases to the point that available RAM is running low, the Linux kernel can either discard some file-backed memory pages, shrinking the cache, or (if swap is enabled) move some program-allocated memory out of physical RAM and onto the swap device. Exactly which it chooses is decided by the kernel's memory-reclaim algorithms.
Eventually, if memory usage keeps rising and swap fills up (or swap is disabled), the system reaches a point where it cannot satisfy a request to allocate more memory, and it has to kill a running program (via the kernel's out-of-memory killer) to recover memory.
How Linux uses swap
To combat these problems, if you have swap enabled the system can move some seldom-used application memory out to the swap device, freeing RAM. The freed RAM can prevent processes from dying due to running out of memory, and it also leaves more room for the file cache, so reading from files on disk stays smooth.
To decide when to swap, the system uses a complex algorithm that weighs the cost of swapping out unused program memory against the cost of relinquishing file-backed memory (memory that mirrors the contents of files).
The "swappiness" tunable does not represent a threshold or a percentage of RAM, even though it has been misrepresented as such by many sources. It is a weighting that tells the system the cost of swapping relative to the cost of re-reading files from disk. For a few years now in Linux, "swappiness" is a value that can go up to 200. "0" is a special value that effectively disables swap unless it's a last resort. Otherwise, values 1 to 200 represent different relative balances between the cost of swapping vs re-reading files, where values over 100 should be used only when your swap device is significantly faster than the system drive.
Within the range 1 to 100, a value of 1 tells the system to heavily favor relinquishing file-backed memory, whereas 100 tells it to treat both options as equal in cost, with values in between striking reasonable balances for systems where the swap device is about as fast as the drive holding your files. The algorithm deciding whether to swap still takes into account other factors, such as how long ago the memory in question was last accessed. The default is now 60, which suits a range of storage technologies including SSDs. Lowering it to around 40 may still make sense for a traditional HDD with slow access times relative to its sequential read speed, and raising it to around 90 may make sense for a modern SSD with fast random access, but the default of 60 remains a reasonable value in both cases.
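If it helps to make the weighting concrete, here is a deliberately simplified model - not the kernel's actual reclaim code, which as noted above also considers how recently each kind of memory was used - that treats swappiness as splitting 200 "points" of reclaim pressure between program (anonymous) memory and the file cache. The function name reclaim_split is made up purely for the illustration:

    #!/usr/bin/env python3
    """Illustration only: swappiness as a split of reclaim pressure.

    A simplified model of the weighting described above, not the kernel's
    actual scan-balancing code.
    """

    def reclaim_split(swappiness):
        # Hypothetical helper: weight anonymous (program) memory against
        # file-backed memory out of a total of 200 "points" of pressure.
        if not 0 <= swappiness <= 200:
            raise ValueError("swappiness must be between 0 and 200")
        anon_weight = swappiness        # willingness to swap program memory
        file_weight = 200 - swappiness  # willingness to drop file cache
        return anon_weight / 200, file_weight / 200

    # Values discussed in the text above.
    for value in (1, 40, 60, 90, 100, 200):
        anon, cache = reclaim_split(value)
        print(f"swappiness={value:3d}: swap {anon:6.1%} / drop cache {cache:6.1%}")

At the default of 60 the split leans towards dropping cache (30% / 70%), at 100 it is even, and values above 100 lean towards swapping - which, as mentioned, only pays off when the swap device is much faster than re-reading the files would be.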
Letting your system swap when it deems it necessary is overall a very good thing, even if you have a lot of RAM. It gives you peace of mind that if you ever run into a low-memory situation, even temporarily (say, while a short-lived process uses a lot of memory), the system has a second chance at keeping everything running. If you go so far as to disable swapping completely, you risk processes being killed because memory can no longer be allocated.
What is happening when the system is bogged down and swapping heavily?
Swapping is a slow and costly operation, so the system avoids it unless it calculates that the gain in cache performance will make up for the cost overall, or unless it's necessary to avoid killing processes.
A lot of the time people will look at a system that is thrashing the disk heavily and using a lot of swap space and blame swapping for it. That gets things backwards: if swapping ever reaches this extreme, it is the system's attempt to cope with a low-memory problem, not the cause of the problem, and without swap your running processes would simply start dying instead.
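If you want to check whether heavy disk activity really is swapping rather than ordinary file I/O, the kernel keeps cumulative counters of pages swapped in and out in /proc/vmstat. Here's a small sketch (standard library only, assumes Linux) that samples them over five seconds:

    #!/usr/bin/env python3
    """Sample swap-in/swap-out rates from /proc/vmstat (Linux only)."""
    import time

    def swap_counters():
        counters = {}
        with open("/proc/vmstat") as f:
            for line in f:
                key, value = line.split()
                if key in ("pswpin", "pswpout"):
                    counters[key] = int(value)
        return counters

    if __name__ == "__main__":
        before = swap_counters()
        time.sleep(5)
        after = swap_counters()
        # Counters are in pages (usually 4 KiB each).
        print(f"pages swapped in : {(after['pswpin'] - before['pswpin']) / 5:.1f}/s")
        print(f"pages swapped out: {(after['pswpout'] - before['pswpout']) / 5:.1f}/s")

Sustained high rates here mean the system genuinely is short of memory; near-zero rates mean the disk activity you're seeing is regular file I/O, not swapping.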
What about desktop systems? Don't they require a different approach?
Users of a desktop system do indeed expect it to "feel responsive" to user-initiated actions such as opening an application - exactly the kind of action that can sometimes trigger swapping because of the sudden increase in memory required.
One way some people try to tweak this is to reduce the swappiness parameter, which increases the system's tolerance for letting applications use up memory at the expense of running the file cache low.
However, this just moves the goalposts. The first application may now load without any swapping, but it leaves less slack for the next application you open - the same swapping may simply happen later. In the meantime, overall performance is lower because the system is purging its file caches. Any benefit from the reduced swappiness setting can therefore be hard to measure: it avoids a swapping delay at some moments but causes slower performance at others. Reducing swappiness as low as 10 leaves much smaller caches and can even create a different kind of disk thrashing, where files the system wants to read keep being purged and have to be re-read.
Disabling swap completely should be avoided as you lose the added protection against out-of-memory conditions which can cause processes to crash or be killed.
The most effective remedy by far is to install more RAM if you can afford it.
Can swap be disabled on a system that has lots of RAM anyway?
If you have far more RAM than your applications are ever likely to need, then you'll rarely need swap, and disabling it probably won't make a noticeable difference under normal circumstances. But leaving swap enabled won't cost you anything either, because the system doesn't swap when it doesn't need to.
The only time it would make a difference is in the unlikely event that the system runs low on available memory - and that is exactly when you would want swap the most. So you can safely leave swap at its normal settings for added peace of mind, without it ever having a negative effect when you have plenty of memory.
But how can swap speed up my system? Doesn't swapping slow things down?
The act of transferring data from RAM to swap is a slow operation, but the kernel only does it when it predicts that the overall benefit of keeping a reasonably sized cache will outweigh that cost. If your system is getting really slow as a result of disk thrashing, swap is not causing it, only trying to alleviate it.
Once data is in swap, when does it come out again?
Any given part of memory will come back out of swap as soon as it's used - read from or written to. However, typically the memory that is swapped is memory that has not been accessed in a long time and is not expected to be needed soon.
Transferring data out of swap is roughly as time-consuming as putting it in, so the kernel won't bring data back unless it needs to. While data sits unused in swap, it leaves more memory free for things that are being used, including more system cache.
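If you're curious which processes currently have memory parked in swap, each process reports a "VmSwap" figure in /proc/<pid>/status. Here's a rough sketch that lists the biggest users (it can only see processes you have permission to read):

    #!/usr/bin/env python3
    """List processes with the most memory in swap, via /proc/<pid>/status."""
    import os

    def swapped_kb(pid):
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmSwap:"):
                        return int(line.split()[1])  # value is in kB
        except OSError:
            pass  # process exited or is not readable
        return 0  # kernel threads have no VmSwap line

    if __name__ == "__main__":
        pids = [p for p in os.listdir("/proc") if p.isdigit()]
        usage = sorted(((swapped_kb(p), p) for p in pids), reverse=True)
        for kb, pid in usage[:10]:
            if kb == 0:
                continue
            try:
                name = open(f"/proc/{pid}/comm").read().strip()
            except OSError:
                name = "?"
            print(f"{kb:>10} kB  pid {pid:>7}  {name}")

On a healthy system you'll often see long-idle background processes near the top of this list, which is exactly the kind of memory swap is meant to hold.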
Other technologies you can use to alter your system swap behavior
zram provides a compressed swap device held in memory. It avoids the relative slowness of reading from and writing to disk or SSD storage, because writing to memory is much faster. In exchange for that swap performance you trade some CPU time, since pages have to be compressed and decompressed, and some physical memory, because even compressed, the zram device still occupies RAM that can't be used for anything else (see however "zram writeback" for a potential alleviation).
zswap is an alternative technology that creates an in-memory, compressed write-back cache in front of a swap device. It offers the same kind of trade-off as zram: CPU time and some memory in exchange for better swap performance. Unlike zram, zswap always requires a regular swap device to be configured as well (and that shouldn't be a zram device). The idea is that when its cache fills up, or in some cases when pages turn out to be incompressible, zswap pages them out to the backing swap device.
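To check whether either of these is active on a given machine, both expose their state through sysfs. The sketch below reads the paths used by current kernels (/sys/module/zswap/parameters/enabled for zswap, /sys/block/zram* for zram devices); the exact files can vary between kernel versions, so treat it as a starting point rather than a definitive tool:

    #!/usr/bin/env python3
    """Report whether zswap and/or zram swap devices are present (Linux only)."""
    import glob
    import os

    def read_sysfs(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return ""

    if __name__ == "__main__":
        zswap_enabled = read_sysfs("/sys/module/zswap/parameters/enabled")
        print(f"zswap enabled: {zswap_enabled or 'module not loaded'}")

        devices = sorted(glob.glob("/sys/block/zram*"))
        if not devices:
            print("no zram devices configured")
        for device in devices:
            name = os.path.basename(device)
            disksize = read_sysfs(f"{device}/disksize")
            # comp_algorithm lists the available compressors, active one in brackets.
            algo = read_sysfs(f"{device}/comp_algorithm")
            print(f"{name}: size {int(disksize or 0) // (1024 * 1024)} MiB, "
                  f"compressor {algo or 'unknown'}")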
Both technologies reduce the performance cost of swapping, which suggests it may be appropriate to raise Linux's swappiness setting significantly. However, swapping through them still isn't free: as discussed, there is some cost in CPU time and in the memory that the compressed store occupies. If you do wish to increase the value, a wise approach is to do so conservatively, test representative workloads in a low-memory situation, and observe the results. If that sounds like too much work, staying with the default remains a very safe course of action, even if some workloads could be improved further with an adjustment - the default swappiness of 60 still works, and it won't prevent you from benefiting from the faster swapping.
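If you do decide to experiment, the current value can be read from - and, as root, temporarily written to - /proc/sys/vm/swappiness; a change made this way lasts only until reboot, which makes it convenient for testing. Here is a minimal sketch of that kind of trial change (make it permanent, for example via a sysctl.d configuration file, only once you're happy with the results):

    #!/usr/bin/env python3
    """Read or temporarily set vm.swappiness for testing (writing needs root).

    Values written here do not survive a reboot.
    """
    import sys

    SWAPPINESS = "/proc/sys/vm/swappiness"

    def get_swappiness():
        with open(SWAPPINESS) as f:
            return int(f.read())

    def set_swappiness(value):
        # Kernels older than 5.8 only accept values from 0 to 100.
        if not 0 <= value <= 200:
            raise ValueError("swappiness must be between 0 and 200")
        with open(SWAPPINESS, "w") as f:
            f.write(str(value))

    if __name__ == "__main__":
        print(f"current vm.swappiness: {get_swappiness()}")
        if len(sys.argv) > 1:
            set_swappiness(int(sys.argv[1]))
            print(f"new vm.swappiness:     {get_swappiness()}")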