mwpmaybe · 9 years ago
My personal rules of thumb for Linux systems. YMMV.

* If you need a low-latency server or workstation and all of your processes are killable (i.e. they can be easily/automatically restarted without data loss): disable swap.

* If you need a low-latency server or workstation and some of your processes are not killable (e.g. databases): enable swap and set vm.swappiness to 0.

* SSD-backed desktops and other servers and workstations: enable swap and set vm.swappiness to 1 (for NAND flash longevity).

* Disk-backed desktops and other servers and workstations: accept the system/distro defaults, typically swap enabled with vm.swappiness set to 60. You can and likely should lower vm.swappiness to 10 or so if you have a ton of RAM relative to your workload.

* If your server or workstation has a mix of killable and non-killable processes, use oom_score_adj to protect the non-killable processes.

* Monitor systems for swap (page-out) activity.
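For concreteness, a minimal sketch of how these knobs are set (values are illustrative, not prescriptive; mysqld is just an example of a process to protect; persist sysctl changes under /etc/sysctl.d/ to survive reboots):

sysctl vm.swappiness                        # check the current value
sudo sysctl -w vm.swappiness=1              # e.g. the SSD rule above
echo -1000 | sudo tee /proc/$(pidof mysqld)/oom_score_adj   # -1000 = never OOM-kill
vmstat 1                                    # watch the si/so columns for page-in/page-out activity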

gregmac · 9 years ago
For the curious (I was):

* vm.swappiness = 0 The kernel will swap only to avoid an out-of-memory condition, when free memory falls below the vm.min_free_kbytes limit.

* vm.swappiness = 1 Minimum amount of swapping without disabling it entirely.

* vm.swappiness = 60 The default value.

* vm.swappiness = 100 The kernel will swap aggressively.

https://en.wikipedia.org/wiki/Swappiness

Jedd · 9 years ago
> vm.swappiness = 0 The kernel will swap only to avoid an out-of-memory condition, when free memory falls below the vm.min_free_kbytes limit.

This is not the case.

It used to be the case, but this changed in kernel version 3.5-rc1 (2012-ish).

There was a discussion about this on HN a few weeks ago: https://news.ycombinator.com/item?id=13511086

And there's a blog post on the percona website about how this rather bizarre change bit them: https://www.percona.com/blog/2014/04/28/oom-relation-vm-swap...

I call it bizarre because (as I wrote in that other HN thread) a) it changed the behaviour of lots of production systems in a surprising way, and b) if you want to ensure your processes never swap you already had the option to not have a swap file or partition.

nisa · 9 years ago
If you are on the experimental side:

There is also zram (just swap kept in memory, lz4/lzo-compressed) and zswap (a compressed in-memory cache for swap pages before they hit disk, which needs a real swap device behind it but compresses pages first).

I run zswap on my desktop and on a few servers; it gives you some more time before the OOM killer comes, and the system stays responsive a bit longer.

zram is a nice idea but quite a beast in practice (at least on MIPS with 32 MB RAM): system CPU time constantly at 100% whenever you actually need it, among other quirks. Maybe it got better or I did something wrong.

But if you need an in-memory compressed block device it's pretty great: you can format it with ext4 and have, in effect, an lz4-compressed tmpfs.
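For anyone wanting to try these, a minimal setup sketch (assuming a kernel built with zswap and lz4 support; sizes and algorithms are just examples):

# zswap: compressed cache in front of an existing swap device
echo 1 | sudo tee /sys/module/zswap/parameters/enabled
echo lz4 | sudo tee /sys/module/zswap/parameters/compressor
echo 20 | sudo tee /sys/module/zswap/parameters/max_pool_percent

# zram: compressed in-RAM block device, used here as swap
sudo modprobe zram
sudo zramctl /dev/zram0 --algorithm lz4 --size 2G
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0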

spangry · 9 years ago
From what I understand, zram results in LRU cache inversion whereas zswap does not (as it intercepts calls to the kernel frontswap API). Although, if you have a workload that would benefit from MRU then I guess this is just a bonus :)

Zswap maintains the default kernel memory-allocation behaviour, with the tradeoff that it needs a backing swap device to push old pages out to (which is why zram tends to be used more often in embedded devices that have only a single volatile memory store, or devices with limited non-volatile storage).

mxvzr · 9 years ago
I use zram rather than a regular swap partition on all my laptops (because I'd rather not swap on SSDs) and desktops (same reason and/or there is an absurd amount of RAM to begin with). I also hear that most chromebooks use zram too (you really don't want to be swapping on that eMMC memory).

I set it up with one zram device per CPU core for a total space of ~20% available RAM.

No performance issues w/ zram so far so I haven't felt the need to change the compression algorithm.

isr · 9 years ago
zram has worked fine on my chromebooks. This is with running multiple chroots - and I have hit the oom killer a number of times (when even zram swap wasn't enough).

Until you actually run out of memory, zram seems very much a set-and-forget type of thing. No babysitting required.

tl;dr: it does what it says on the tin, and ... with minimal CPU impact.

nerdponx · 9 years ago
First I've heard of either. How would I set these up?
sbuttgereit · 9 years ago
Wasn't there a Debian/Ubuntu thing recently where vm.swappiness = 0 had a behavior change which increased the number of incidents of the OOM killer stomping on things like database processes?

(Maybe it wasn't so new... https://www.percona.com/blog/2014/04/28/oom-relation-vm-swap...)

mwpmaybe · 9 years ago
Thank you for sharing this. There's an interesting conversation thread in the comments on that post. It's a little over my head, but my takeaway is that with the kernel change, in an OOM event, MySQL is unable to be swapped out due to the type(s) of memory pages it's using, so the kernel is forced to kill it (or itself). In practice, it's relatively straightforward to tune MySQL/MariaDB for a certain memory allocation, and if it's on a shared host, oom_score_adj can be set to protect it.
kalleboo · 9 years ago
> * SSD-backed desktops and other servers and workstations: enable swap and set vm.swappiness to 1 (for NAND flash longevity).

Is this that big of a worry? I have a 5-year old SSD in my daily driver laptop, on OS X which loooves to swap out anything it can to gain memory for disk cache, and I'm still barely 15% into the SSD wearout.

nerdponx · 9 years ago
How big is the OSX/macOS swap? It's a file and not a partition, right?
mwpmaybe · 9 years ago
Nope, just a rule of thumb. :-)
Animats · 9 years ago
Swapping should have disappeared years ago. At best, it gives the effect of twice as much memory, in exchange for much slower speed. It was invented when memory cost a million dollars a megabyte. Costs have declined since then. How much does doubling the memory cost today?

What seems to keep swap alive is that asking for more memory ("malloc") is a request that can't be refused. Very few application programs handle an out of memory condition well. Many modern languages don't handle it at all. Nor is it customary to check for a "memory tight" condition and have programs restrain themselves, perhaps by starting fewer tasks in parallel, opening fewer connections, keeping fewer browser tabs in memory, or something similar.

I've used QNX, the real-time OS, as a desktop system. It doesn't swap. This makes for very consistent performance. Real-time programs are usually written to be aware of their memory limits.

Most mobile devices don't swap. So, in that sense, swapping is on the way out.

AnthonyMouse · 9 years ago
> Nor is it customary to check for a "memory tight" condition and have programs restrain themselves, perhaps by starting fewer tasks in parallel, opening fewer connections, keeping fewer browser tabs in memory, or something similar.

These aren't mutually exclusive and are actually complementary with swap.

If you have more than enough memory then swap is unused and therefore harmless. The question is, what do you do when you run out? Making the system run slower is almost always better than killing processes at random.

And it gives processes more time to react to a low memory notification before low turns into none and the killing begins, because it's fine for "low memory" to mean low physical memory rather than low virtual memory.

It also does the same thing for the user. "Hmm, my system is running slow, maybe I should close some of these 917 browser tabs" is clearly better than having the OS kill the browser and then kill it again if you try to restore the previous session.

jstimpfle · 9 years ago
> Making the system run slower is almost always better than killing processes at random.

In practice, heavy swapping (back and forth) makes it impossible even to kill the culprit manually (because I can't open an xterm or whatever), while there is often no benefit to having the processes continue running that slowly.

Also, ideally programs should be written with the assumption that the machine could go down at any instant. Having a few more cases where the program is killed will have the effect that the program is better tested and debugged.

qznc · 9 years ago
I cannot remember a single occasion where my desktop recovered once it started swapping. Every time, the whole system locks up and I need to reboot. Thus, better to kill some random processes instead of all of them.
thatcks · 9 years ago
Swap space is only partially related to virtual memory overcommit, and virtual memory overcommit is extremely common and almost unavoidable on most Unix machines. Part of this is a product of a deliberate trade-off in libraries between virtual address space and speed (for example, internally rounding up memory allocation sizes to powers of two), and part of this is due to Unix features that mean a process's theoretical peak RAM usage is often much higher than it will ever be in reality.

(For example, if a process forks, a great deal of memory is shared between the parent and child. In theory one process could dirty all of their writeable pages, forcing the kernel to allocate a second copy of each page. In practice, almost no process that forks will do that and reserving RAM (or swap) for that eventuality would require you to run significantly oversized systems.)
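For the curious, Linux's overcommit policy is visible and tunable; a quick way to inspect it (standard sysctls, nothing exotic):

sysctl vm.overcommit_memory    # 0 = heuristic (default), 1 = always overcommit, 2 = strict accounting
sysctl vm.overcommit_ratio     # only used in mode 2
grep -i commit /proc/meminfo   # CommitLimit vs. Committed_AS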

euyyn · 9 years ago
Plus mobile apps do get, and usually handle, a low-memory notification from the OS.
RubenSandwich · 9 years ago
On iOS, too many low-memory warnings in a set amount of time will get your app killed (Apple won't tell developers how many, or in what time frame, to keep them from gaming the system).
Gaelan · 9 years ago
Until Apple stops soldering on memory, swap will still be alive on the desktop.
Animats · 9 years ago
Years ago, about 80% of desktop machines were never opened during their life. It's probably higher today.
Spooky23 · 9 years ago
... for a small fraction of users.
dredmorbius · 9 years ago
Memory allocation is a non-market operation on (most? all?) operating systems. There's effectively no cost to processes allocating memory, and a fair cost to them not doing so.

I'm not sure whether turning this into a market-analogous operation (bidding some ... other scarce resource -- say, killability?) would make the situation better or worse. And the problem ultimately resides with developers. But as a thought experiment this might be an interesting place to go.

cmrx64 · 9 years ago
This idea was implemented in EROS, and we've been exploring it for Robigalia as well. Storage is a finite resource which can be transferred between processes, including an "auction" mechanism which allows two processes to examine a trade before agreeing to it.
nerdponx · 9 years ago
Doesn't this already exist for processor scheduling?
scottlamb · 9 years ago
I hate swap. My experience with it is that once a disk-backed machine (as opposed to SSD) has started swapping, it's essentially unusable until you manually force all anonymous pages to be paged in by turning off swap ("sudo swapoff -a" on Linux) or reboot.

My hunch is that the OS is swapping stuff back in stupidly. Once memory is available, I'd like it to page everything back proactively, preferring stuff from swap and then from file-backed mmaps. But instead it seems to be purely reactive, each major page fault requiring a disk seek to page in what's needed with little if any readahead. Basically the whole VM space remains a minefield until you stumble over and detonate each mine in your normal operation. Much better to reboot and have a usable system again.
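For what it's worth, Linux's swap-in readahead is governed by vm.page-cluster: 2^N pages per major fault, default 3, i.e. eight 4 KiB pages, which is still tiny next to a whole working set. You can check or raise it (value illustrative; trades per-fault latency for fewer seeks):

sysctl vm.page-cluster
sudo sysctl -w vm.page-cluster=5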

On my Linux systems, I've turned off swap.

On OS X...last I checked, I wasn't able to find a way to do this. I'd like to turn off swap entirely, or failing that, have some equivalent way to force all of swap to be paged in now so I don't have to reboot when I hit swap. Anyone know of a way?

outworlder · 9 years ago
> My experience with it is that once a disk-backed machine (as opposed to SSD) has started swapping, it's essentially unusable until you manually force all anonymous pages to be paged in by turning off swap ("sudo swapoff -a" on Linux) or reboot.

That depends. If your workload exceeds the amount of available memory, you will start "thrashing" the disk, and that can make a system unresponsive.

If you happen to launch a large application, or start working with a big file, unused pages will be evicted to disk to make room and, after some slowdown, the system should become perfectly usable again. YMMV

On OSX, I don't know a way, but I can't recall the last time I had to reboot due to RAM/swap issues, even when I was developing apps on a 4GB Macbook Air. I guess memory compression, which is enabled by default, helps here. Most OSX systems have very fast SSDs as well.

scottlamb · 9 years ago
> If you happen to launch a large application, or start working with a big file, unused pages will be evicted to disk to make room and, after some slowdown, the system should become perfectly usable again. YMMV

What is an unused page? One that the foreground, memory-hungry application doesn't need? Okay, fine, but what happens when you switch back to some other application? My experience is that it needs the RAM that was paged out, and it doesn't get paged back in all at once. Every time you hit some 4 KiB of memory that happens to be paged out, you wait another 10 ms. I don't know how much beyond the 4 KiB gets paged in at the same time. Worst-case, there's no read-ahead at all. Let's say the application is using 1 GiB of RAM. Then this can happen 262,144 times, which means 44 minutes of waiting in small bursts as you're trying to use it, rather than the 10 seconds (at 100 MB/s) it'd take to read it all in one go. That's what I mean when I say the machine is unusable.

tluyben2 · 9 years ago
On my 2015 MBP with 8 GB & SSD, I often get stuck for 10-15 minutes, unable to do anything, while it thrashes. And I'm someone who keeps Activity Monitor handy. I don't have this problem on my much older and weaker Ubuntu X220s doing the same type of development. Not sure why that is.
bluedino · 9 years ago
Usually when people say 'swapping' they mean page faulting. It's nothing more than a slight annoyance on a single-user machine if you swap for 10 seconds, but on a busy server you are dead in the water.
zzzcpan · 9 years ago
"My experience with it is that once a disk-backed machine (as opposed to SSD) has started swapping, it's essentially unusable"

The OS should start swapping very early to avoid bursts of disk I/O that render the system unusable. On Linux this is somewhat configurable, even if not user-friendly: a combination of swappiness and vfs_cache_pressure can turn it into a usable machine, taking care of inefficient memory usage, memory leaks, unnecessary VFS cache, etc.
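Something like the following, as a starting point consistent with that argument (numbers are illustrative and very workload-dependent):

sudo sysctl -w vm.swappiness=100          # swap early and gradually rather than in a burst
sudo sysctl -w vm.vfs_cache_pressure=200  # reclaim dentry/inode caches more aggressively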

scottlamb · 9 years ago
> The OS should start swapping very early to avoid bursts of disk I/O that render the system unusable.

I think you're talking about the I/O of paging dirty things out, but I'm talking about the fact that some memory location is no longer present in RAM, so accessing it will take 10 ms or more to page in.

The system is not only useless while actively swapping. It's useless after it has ever swapped, and you can only recover by disabling swap ("sudo swapoff -a") or rebooting.

calpaterson · 9 years ago
Contrarian anecdote: I've recovered from swapping a few times on a desktop machine with an HDD (typical scenario: an ON clause was omitted from a join and Postgres is doing a cartesian join between two tables). I didn't find things unusable instantly, and I was able to recover by SIGTERMing the relevant process and then running `swapoff --all` while maybe going for a tea break. YMMV.
dormento · 9 years ago
On OS X it used to be that you had to disable the pager daemon, then remove the swap files. You'll probably need to disable System Integrity Protection for this to work, though.

sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist

Then:

sudo rm /var/vm/swapfile*

Disclaimer: haven't tried doing it this way on Sierra.

benibela · 9 years ago
Something seems to be seriously wrong with the swap implementation on modern systems.

20 years ago on Windows 98 it just started swapping, but it was no big deal. If something became too slow to be usable, you could just press ctrl+alt+del and kill that swapped program and everything worked fine afterwards.

On my modern Linux laptop, on the other hand, it starts swapping, and it swaps and swaps and you can do nothing, not even move the mouse, until 30 minutes later something crashes.

bsdetector · 9 years ago
> on Windows 98 it just started swapping, but it was no big deal.

At that time, swapping out a 4k page freed a significant fraction of memory: 4k of 16 MiB is 1/4096 of the machine. Each page swapped out got back a meaningful share of the memory a program needed. Swap still works in 4k pages, but memory has expanded a thousandfold, so swap is basically a thousand times worse today than it was in the time of Windows 98.

For hard drives, swap isn't used to expand memory anymore; it's used to page out initialization code and other 'dead' memory. Swap should be set to only a tiny fraction of memory size for this reason, to prevent it from being used to handle actual out-of-memory conditions. But realistically, for most users it's not even worth enabling, given the occasional page that then needs to be swapped back in from disk.

For SSDs, seek speed has improved enough to match the extra memory, so swap can still be used as in the old days to expand the effective memory size. But memory is now so large that a swap file at a fraction of memory size, just to offload 'dead' memory, is enough, unless there's a specific reason to actually use swap for out-of-memory conditions.

throwawayish · 9 years ago
I have been using various operating systems for a while.

I feel like Linux has, in general, from a UX point of view, the worst behaviour when swapping and the worst behaviour in general under memory pressure.

I feel like it has gotten worse over time, which might not be just the kernel but the general desktop ecosystem. If you require much more memory to move the mouse or show the task manager equivalent, then the system will be much less responsive when it thrashes itself.

Honestly, I'd much rather have Linux just crash and reboot; that'd be faster than its thrashing tantrums.

Luckily, there's earlyoom, which just rampages the town quickly if memory pressure approaches. Like a reboot (ie. damage was done), just faster.
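A typical invocation, going from earlyoom's documented flags (thresholds are examples):

earlyoom -m 5 -s 5    # start killing once available RAM and free swap both drop below 5%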

In any case, it makes me sad (in a bad way) to see how bad the state of things is when it comes to the basics of computing, like managing memory.

tluyben2 · 9 years ago
Not an excuse for bad implementations, but since I started running i3wm, my feelings of happiness have increased rapidly. To such an extent that I never want to run anything else: stability, speed, memory use... it solves (for me) the issues you have.
TylerE · 9 years ago
That's what happens when you run everything through 100 layers of abstraction. Windows, for better or for worse, runs most things closer to the metal.
Asooka · 9 years ago
Because Windows 98 always kept enough resources available to show you the c-a-d dialog. On Linux, however, there is no "the shell must remain interactive at all times" requirement, so a daemon that gobbles memory and your rescue shell have the exact same priority. Modern Windows even has a graphics card watchdog and if any application issues a command to the GPU that takes too long, it's suspended and the user is asked if it should be killed. Probably not what you want on an HPC that does deep learning, but exactly what you want on an interactive desktop.

I suppose it might be possible to whip something up with cgroups and policy that will keep the VT, bash, X and a few select programs always resident in memory and give them ultimate I/O priority, but I haven't tried.
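A rough sketch of the cgroup half of that idea, in cgroup v2 syntax (newer than most of this thread; assumes the memory controller is enabled on the unified hierarchy, where memory.min gives hard protection from reclaim):

sudo mkdir /sys/fs/cgroup/rescue
echo 256M | sudo tee /sys/fs/cgroup/rescue/memory.min   # guaranteed physical memory
echo $$ | sudo tee /sys/fs/cgroup/rescue/cgroup.procs   # move the current shell in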

plorkyeran · 9 years ago
This is the exact opposite of my experience. Back in the Windows 9x days it was a fairly routine experience for the system to soft-lock with the HD grinding away and I'd sometimes end up just hard rebooting the computer after waiting a few minutes for the ctrl-alt-delete dialog to appear. On macOS with a SSD I don't even notice when my system is swapping heavily.
thiagobbt · 9 years ago
Isn't this related to this change on kernel 4.10? https://kernelnewbies.org/Linux_4.10#head-f6ecae920c0660b7f4...
ChuckMcM · 9 years ago
Possibly, however since the writeback behavior is configurable I expect you could test that thesis by changing the aggressiveness of the writeback draining.
rootbear · 9 years ago
Could this be a reflection of the increasing gulf between RAM speed and HD speed? Even with NVMe drives, which one probably shouldn't be swapping to anyway, RAM is orders of magnitude faster.
curlypaul924 · 9 years ago
I think, among other things, it has to do with the size of the swap space relative to the speed of the swap device. IME high disk i/o combined with large swap space means swap never fills up and the OOM killer doesn't kick in. On systems with less RAM and swap, OOM conditions were hit much sooner, even with slower disks.

Default settings for dirty ratio and dirty background ratio exacerbate the issue: more data is held onto before it is written, background writeback kicks in only at the background ratio, and once the dirty ratio itself is hit, any application writing to disk will block.

Retric · 9 years ago
With SSDs, disk is not that slow.
tedunangst · 9 years ago
0. Possibly not true in all cases.

1. Modern systems are much more aggressive about enormous disk caches, which can ironically lead to IO storms when the system swaps out your application to buffer writes, then has to flush the cache to swap the app back in.

2. Difference in working set size and number of background programs waking up.
digi_owl · 9 years ago
I think that's more related to Linux and its prioritization of IO than anything else. Note that the latest kernel release (4.10) contains an IO throttle that should improve this experience.

https://kernelnewbies.org/Linux_4.10#head-f6ecae920c0660b7f4...

leni536 · 9 years ago
I feel you. X and some recovery-critical software should have their own reserved memory cgroup with some guaranteed, safe amount of physical memory and 0 swappiness. I speculate that on Windows this works so well because most of that stuff is in kernel space anyway.
mwpmaybe · 9 years ago
If you have an SSD, try setting vm.swappiness to 1 (not 0).
alsadi · 9 years ago
Just type:

sudo swapoff -a
sudo swapon -a

benibela · 9 years ago
Can't type while it is thrashing. Otherwise the offending program could just be killed
derefr · 9 years ago
What I've always been specifically confused about is whether there's any point in giving a VM a swap partition inside its virtual disk, rather than just giving it a lot of regular virtual memory (even overcommitting relative to the host's amount of memory) and letting the host swap some of that RAM out to its own swap partition.

Personally, I've never given VMs swap. I'd rather have memory pressure trigger horizontal scaling (or perhaps vertical rescaling, for things like DBMS nodes) than let Individual VMs struggle along under overloaded+degraded conditions.

tedunangst · 9 years ago
Generally yes. In fact, this is why "balloon" drivers exist, to allow the host to create backpressure and make the guest swap. The guest knows more about which pages are interesting than the host. If you make the host do the swapping, it will pick silly things, like the guest's disk cache, to write to swap.
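With KVM/libvirt, for instance, that backpressure is applied through the balloon device (guest1 is a hypothetical domain name; size argument is in KiB):

virsh setmem guest1 2097152 --live   # shrink the guest to 2 GiB via the virtio balloon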
scott_s · 9 years ago
For clarification to other readers, "Generally yes" was the reply to the originally posed question, which means the above comment actually disagrees with the suggested solution. (I had to read both a few times to get this straight.)
293984j29384 · 9 years ago
Ah, this is a great idea. It'd also be easier to understand and see service degradation (i.e. physical memory being used on the host) directly from something like vCenter, instead of relying on Solarwinds to tell me the host is out of memory.
moftz · 9 years ago
But does the host actually know what is appropriate to swap? It doesn't know what's contained in the chunk of memory it just swapped out. Ideally, though, you would just build the system with enough memory for each VM to run at full capacity, plus whatever overhead your hypervisor needs. You wouldn't want the host swapping out anything related to your VMs, because it's just going to kill the performance of the affected VM. Give each VM its own swap space and let the guest figure out what needs to be swapped.
sirn · 9 years ago
One use of swap on modern systems: hibernation. If you need hibernation, a swap must exist, either as a swapfile (pre-allocated, as uswsusp requires a fixed offset on the disk to resume) or as a partition.
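A sketch of the swapfile variant (size and boot parameters are illustrative; some filesystems, e.g. btrfs, have extra caveats, and older setups may need dd instead of fallocate):

sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo filefrag -v /swapfile | head   # the first physical_offset is the resume offset
# then boot with: resume=<root device> resume_offset=<that offset>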
lmm · 9 years ago
I've been reading these stories for ten years. About 8 years ago I started taking them seriously and stopped using swap. Turns out not having swap works much better. I'm amazed how slowly the consensus seems to be moving though.
njharman · 9 years ago
Systems are used for vastly different purposes. With different memory usages and expected operation.

There can be no consensus because there is no one answer.

jcrites · 9 years ago
We reached this same conclusion for our servers generally. The problem with swap is that it's unpredictable. It's better most of the time to have a system that's predictable. However much RAM is available to the system, you can deal with that, by making an appropriate choice of hardware type, or by scaling up, tuning software, etc. It's harder to deal with performance problems related to use of swap in my experience, since it's nondeterministic what will be swapped.
problems · 9 years ago
Yeah. I've had issues with this on some systems.

On Windows without swap, when you get even remotely low on RAM, things start going really poorly for some reason: random latency. Even with 16 GB of RAM I couldn't disable swap on Windows without some really strange performance characteristics. I run SSDs so I really wanted it off, and I just stuffed more RAM in my box; with 32 GB it isn't a problem.

On Linux, however, you can pretty much turn it off and everything will run smoothly until you're actually out; then you lag badly for a moment, Linux's oom-killer does its thing, and all is good again within the span of a few seconds.

jandrese · 9 years ago
I've noticed the same thing: Windows just becomes bizarrely cranky if you disable swap entirely. My solution was instead to leave it on, but limit it to just a couple of megabytes. That seems to avoid the VM subsystem freakouts thus far.
slededit · 9 years ago
On Windows, when you allocate, the OS guarantees it has the memory to fulfill the request at the time of the request. On Linux, no check is made until you try to use the memory.

Because of this, memory pressure will be higher on a Windows box. Paging helps paper over this, since the commit can be billed to the page file rather than to RAM. Windows is smart enough not to write anything to swap until you actually use the page, so in practice this is rarely a problem.

The benefit of this approach is that you actually have a hope of recovering from OOM.

ams6110 · 9 years ago
> Linux's oom-killer does its thing

Usually selecting sshd to kill, in my experience, rendering the server inaccessible.

rogerbinns · 9 years ago
Two examples of why I have swap:

* On a laptop to hibernate, which results in zero power consumption vs suspend which will drain the battery in a day or so

* I use tmpfs for /tmp and using swap as the backing is far more performant than regular filesystems
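The /tmp half of that is just an fstab line (size is an example; cold tmpfs pages can be swapped out, unlike a ramdisk):

tmpfs /tmp tmpfs defaults,size=2G,mode=1777 0 0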

77pt77 · 9 years ago
> On a laptop to hibernate, which results in zero power consumption vs suspend which will drain the battery in a day or so

Swap is not strictly needed for this:

(it boils down to vm.swappiness=1)

https://wiki.debian.org/Hibernation/Hibernate_Without_Swap_P...

lmm · 9 years ago
> * I use tmpfs for /tmp and using swap as the backing is far more performant than regular filesystems

This seems absurd. You're running an in-memory filesystem backed by memory-on-disk? You weren't comparing to a journalled filesystem or something like that?

tossaway1 · 9 years ago
> I've been reading these stories for ten years. About 8 years ago I started taking them seriously and stopped using swap.

Not sure what you're referring to here. This story doesn't recommend eliminating swap...

tyingq · 9 years ago
"Systems without swap can make sense and are supported by Red Hat - just be sure the behaviour of such a system under memory pressure is what you want"

So, it doesn't exclusively recommend it, but it concedes that there are use cases where it makes sense.

contingencies · 9 years ago
Ditto, and over that period memory has become even cheaper.

I sort of wonder if we'll see a 100% RAM, large memory laptop soon that boots from an SD-card or in a cryptographically secure fashion over 4G wireless networks, aggressively disables RAM for power saving and suspends well.

pizzetta · 9 years ago
Aren't there legacy applications that expect swap, whereas with modern applications swap isn't necessary? Or at least that is my current (mis)understanding...
phil21 · 9 years ago
This is by far my biggest pet peeve in the space: the "rule of thumb" that you need 2x RAM as swap. Even 10 years ago this "rule" was ancient and useless, but it was always a constant challenge educating customers as to why, and that yes, we really did know better than their uncle Rob.

Once a server hits swap, it's dead. There is no recovering it other than for exceptional cases. If you are swapping out, you've already lost the battle.

I tend to configure servers with 512MB to 1GB of swap simply so the kernel can swap out a couple hundred MB of pages it never uses - but that's more to make people feel better than it being genuinely useful.

thatcks · 9 years ago
Rules of thumb involving more swap than RAM probably date from decades ago, when Unix virtual memory systems were sufficiently primitive that the total amount of virtual memory you could use was just your swap space, not swap space plus (most of) RAM.

(The limitation came about because the simple way to handle swapping is to assign every potentially swappable page of virtual memory a swap address when you allocate it in the kernel. Then the kernel always knows that there's space for the page if it ever needs to swap it out and you're never faced with a situation where you need to swap out a page but there's no swap space left.)

toast0 · 9 years ago
2x RAM as swap is clearly bad, but I like having around 512MB to 1GB (on systems of basically any size); when you do start using more RAM than you have, it gives you some buffer (as long as you actually alert on it). If you have a small memory leak, you can recover; if you have a large memory leak, you're going to run out of swap pretty quickly anyway.
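A crude check one could alert on (the threshold is arbitrary):

free -m | awk '/^Swap/ { if ($3 > 256) print "swap in use: " $3 " MiB" }'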