[OpenIndiana-discuss] Useful tidbit for ZFS backup via ZFS Send

Richard Elling richard.elling at richardelling.com
Sat Sep 29 22:15:33 UTC 2012

On Sep 29, 2012, at 6:46 AM, Bryan N Iotti <ironsides.medvet at gmail.com> wrote:

> Hi all,
> thought you'd like to know the following...
> I have my rpool on a 146GB SCSI 15K rpm disk.
> I regularly back it up with the following sequence of commands:
> - zfs snapshot -r rpool@<DATE>
> - cd to backup dir and su
> - zfs send -R rpool@<DATE> | gzip > rpool.COMPLETE.<DATE>.gz
> ... as per Oracle manual.
> I was wondering why it was so slow, taking a couple of hours, until I paid attention to my CPU meter and realized that plain gzip was running as a single thread.
> I searched online for a multithreaded version and came across pigz (http://www.zlib.net/pigz/). I downloaded it, modified the Makefile to use the Solaris Studio 12.3 cc with the proper CFLAGS ("-fast", for now, no -m64). I then copied both pigz and unpigz to a directory in my PATH and modified the last command to:
> - zfs send -R rpool@<DATE> | /usr/local/bin/pigz > rpool.COMPLETE.<DATE>.gz
> Now backups are compressed using multiple threads and the process takes about a quarter of the time.
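The sequence above, together with a matching restore path, might look like the following shell sketch. The pool name `rpool` and the pigz path follow the post; the restore target dataset `backup/restored` is an assumption for illustration:

```shell
# Sketch of the backup flow described above; pool name and pigz path
# follow the post, the restore side is a hypothetical example.
DATE=$(date +%Y%m%d)

# 1. Take a recursive snapshot of the root pool.
zfs snapshot -r rpool@"$DATE"

# 2. Stream the whole pool and compress with pigz (parallel gzip).
zfs send -R rpool@"$DATE" | /usr/local/bin/pigz > rpool.COMPLETE."$DATE".gz

# 3. To restore later, decompress and feed the stream to zfs receive.
#    'backup/restored' is a hypothetical target dataset.
unpigz -c rpool.COMPLETE."$DATE".gz | zfs receive -F backup/restored
```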

Why not send them to a dataset created for backup, with compression=gzip set?
That way you get parallelized compression and no need to install anything :-)
 -- richard
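Richard's alternative keeps the stream inside ZFS: receive it into a dataset whose compression property is gzip, so the blocks are compressed in the ZFS write pipeline across multiple CPUs. A sketch, assuming a second pool named `backup` exists to hold the copy:

```shell
# Hypothetical sketch of the compressed-dataset approach; the pool
# name 'backup' and dataset 'backup/rpool' are assumptions.
zfs create -o compression=gzip backup/rpool

# Receiving into the gzip-compressed dataset compresses the data as it
# is written, in parallel, with nothing extra installed.
# -F rolls back the target if needed, -u leaves it unmounted.
zfs send -R rpool@"$DATE" | zfs receive -Fu backup/rpool
```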

> Pigz apparently splits the input into blocks of a user-definable size (128 KB being the default) and compresses them in parallel, using all available threads (or fewer, with the -p #threads option).
> Hope this helps some of you. Would be nice to have in the pkg repo.
> Bryan
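The block size and thread count Bryan mentions map to pigz's -b (block size in KB, 128 by default) and -p (thread count) options. A hypothetical tuned variant of the backup command:

```shell
# Hypothetical tuning example: 256 KB blocks, 8 compression threads.
zfs send -R rpool@"$DATE" | pigz -b 256 -p 8 > rpool.COMPLETE."$DATE".gz
```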
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss

illumos Day & ZFS Day, Oct 1-2, 2012 San Francisco
Richard.Elling at RichardElling.com
