[oi-dev] Change Request: Lowering dump and swap defaults AKA My dump is too big!

Alasdair Lumsden alasdairrr at gmail.com
Sun May 22 22:43:41 UTC 2011


Hi Deano,

On 22 May 2011, at 23:02, Deano wrote:

> Hello,
> Currently we have bug 1024 which relates to an install blocker due to the way the installer defaults swap and dump sizes.
> The relevant parts are in slim_source/usr/src/lib/install_target/controller.py
> Line 626 in the function calc_swap_dump_size
> The table explains that above 1GB, swap defaults to half the memory size, capped at 32GB, and above 0.5GB, dump defaults to half the memory size, capped at 16GB
>  
> So a machine with 32GB or more will have 16GB dump space and 16GB-32GB swap space, which is quite nasty when many of us are now using SSD as boot drives.

Yes, I've encountered this one myself, although it was in the text installer, which uses a slightly different algorithm IIRC.
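For reference, the defaults Deano describes can be sketched roughly like this (a paraphrase of the table he quotes, not the actual calc_swap_dump_size code; function name and exact minimum thresholds are simplified for illustration):

```python
def current_swap_dump_defaults(ram_gb):
    """Rough paraphrase of the slim_source defaults described above:
    swap is half of RAM capped at 32 GB, dump is half of RAM capped
    at 16 GB. Lower bounds are simplified here."""
    swap_gb = min(max(ram_gb / 2.0, 1.0), 32.0)
    dump_gb = min(max(ram_gb / 2.0, 0.5), 16.0)
    return swap_gb, dump_gb
```

So a 32GB machine gets 16GB of each, and a 64GB machine gets 32GB swap plus 16GB dump, i.e. 48GB of the boot drive gone before the OS is even installed.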

> First question, why huge space for dump files anyway? How many people use that facility?

There was a discussion on #oi-dev recently - I personally felt that a dump device is unnecessary in most circumstances, as few people have time to send in crash dumps, and if a crash is persistent a dump device can be added afterwards. However, others on the project felt the dump device is useful, as it means that after a crash there is data on hand to work out what happened.

But I think we all agreed the dump sizes can be unnecessarily large.

> Second do we really use half memory for swap with large memory configs?

I'm no expert on the VM subsystem, but I believe Solaris doesn't allow overcommitting virtual memory: every allocation has to be reserved against RAM or swap at malloc() time, even if the memory never gets used. So on large memory configs you probably do need a lot of swap, otherwise you'll struggle to use all your RAM, as lots will be reserved but never touched. I'd love someone to clear this up if that's not the case and my understanding is wrong.
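A toy illustration of that reasoning (the 1 GB kernel figure is an arbitrary assumption for the example, not a measured value):

```python
def reservable_vm_gb(phys_ram_gb, swap_gb, kernel_gb=1.0):
    """Under strict reservation (no overcommit), every allocation must
    be backed by RAM or swap when it is made, even if the pages are
    never written. Total reservable virtual memory is therefore
    roughly RAM (less the kernel's share, assumed 1 GB here) plus
    swap."""
    return phys_ram_gb - kernel_gb + swap_gb

# With 48 GB RAM and no swap, allocations start failing once roughly
# 47 GB is reserved, even if most of those pages are never touched.
```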

> I suggest that we limit dump space to a small fraction, say 256MB (its minimum according to that function) and cap swap space by default to say 4 or 8 GB. This would seem to be more reasonable defaults to me, and both can be increased if required by a particular user.
> 
> With a swap default maximum of 8GB, this would reduce our minimal install size from roughly 4GB + 0.8 * RAM to roughly a max of 13 GB.
> In real figures, a server with 48GB RAM would require 13 GB of boot drive space versus the current 44 GB

I am absolutely for changing the algorithm/defaults for swap+dump to something far saner. I think a bigger dump and more swap is called for in higher memory situations, but getting the algorithm right is tricky.

Do you know whether the installer knows how large the zpool is at the point where it calculates the swap size? We could, for example, size swap+dump based on both how much RAM there is and how large the rpool is: a space-constrained swap+dump layout for small drives, and a more generous one for larger drives. For example, if swap+dump would be bigger than 25% of the rpool, switch to allocating a minimal dump and capping swap+dump at 25%.

So on a machine with 64GB of RAM but a 50GB rpool, you'd get 12.25GB swap and a 256MB dump. If the machine had 16GB RAM you'd get an 8GB swap and a 4.5GB dump. Maybe we should cap the dump size at 2GB and simply recommend that systems with larger kernel memory footprints (e.g. fileservers with lots of ZFS filesystems) increase their dump size.
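Something like the following sketch would reproduce those figures (names and the exact priority given to swap over dump are my assumptions, not installer code):

```python
MIN_DUMP_GB = 0.25  # 256 MB, the minimum mentioned above

def proposed_swap_dump(ram_gb, rpool_gb):
    """Size swap and dump from RAM as today, but cap their combined
    size at 25% of the rpool. When the cap bites, swap gets priority
    and dump takes whatever is left, never less than 256 MB."""
    cap_gb = 0.25 * rpool_gb
    want_swap = min(ram_gb / 2.0, 32.0)
    want_dump = min(ram_gb / 2.0, 16.0)
    if want_swap + want_dump <= cap_gb:
        return want_swap, want_dump
    swap_gb = min(want_swap, cap_gb - MIN_DUMP_GB)
    dump_gb = max(cap_gb - swap_gb, MIN_DUMP_GB)
    return swap_gb, dump_gb
```

With 64GB RAM on a 50GB rpool this gives 12.25GB swap and a 256MB dump; with 16GB RAM it gives 8GB swap and a 4.5GB dump, matching the numbers above.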

I'm also wondering if we could use a sparse zvol for the dump area with no refreservation. Yes, the dump will fail if there's not enough free space at crash time, but it would allow a larger dump size to be specified, and the dump would still succeed whenever enough free space is available.
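The setup would just be the ordinary zvol and dumpadm commands, with the sparse flag added - whether dumpadm actually accepts a sparse, unreserved zvol as a dump device is exactly the open question here, so treat this as an untested sketch:

```shell
# Create the dump zvol sparse (-s) so no space is reserved up front,
# and drop the refreservation; a crash dump will simply fail if the
# pool lacks free space at the time of the crash.
zfs create -s -V 16G rpool/dump
zfs set refreservation=none rpool/dump
dumpadm -d /dev/zvol/dsk/rpool/dump
```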

Lots to think about. We should definitely come to a conclusion on this before shipping 151 stable.

Cheers,

Alasdair
