[OpenIndiana-discuss] Useful tidbit for ZFS backup via ZFS Send
martin at martux.org
Sat Sep 29 23:34:35 UTC 2012
On Sun, Sep 30, 2012 at 12:15 AM, Richard Elling
<richard.elling at richardelling.com> wrote:
> On Sep 29, 2012, at 6:46 AM, Bryan N Iotti <ironsides.medvet at gmail.com> wrote:
>> I searched online for a multithreaded version and came across pigz (http://www.zlib.net/pigz/). I downloaded it, modified the Makefile to use the Solaris Studio 12.3 cc with the proper CFLAGS ("-fast", for now, no -m64). I then copied both pigz and unpigz to a directory in my PATH and modified the last command to:
>> - zfs send -R rpool@<DATE> | /usr/local/bin/pigz > rpool.COMPLETE.<DATE>.gz
>> Now backups are compressed using multiple threads and the process takes about a quarter of the time.
> Why not send them to a dataset created for backup and setting compression=gzip?
> That way you get parallelized compression and no need to install anything :-)
> -- richard
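Richard's suggestion can be sketched roughly like this (the pool and
dataset names here are hypothetical, not from the thread; the point is
that the receiving dataset compresses transparently, spread across CPUs
by the ZFS I/O pipeline):

```shell
# Create a backup dataset with gzip compression enabled (hypothetical names).
zfs create -o compression=gzip tank/backups

# Write the replication stream straight into it; ZFS compresses the
# file's blocks as they are written, no external compressor needed.
zfs send -R rpool@<DATE> > /tank/backups/rpool.COMPLETE.<DATE>.zfs
```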
I also wondered about this, but it seems Bryan simply wants to
have them as distinct archive files that he can move around with
standard tools (wget and the like).
In any case, Bryan's pointer to pigz is interesting in itself,
completely independently of ZFS.
Back to ZFS: he could also set the .zfs directory to visible and then
access all of his various snapshots at will, separately.
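For reference, that is a one-line property change (rpool is the pool
from the thread; any dataset works):

```shell
# Make the normally hidden .zfs directory visible on the dataset,
# then browse individual snapshots as read-only directory trees.
zfs set snapdir=visible rpool
ls /rpool/.zfs/snapshot/
```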
One advantage of his archive-file solution is that he can move his
backups to pools running an older ZFS version if he wishes, for
whatever reason. Even to UFS if he wants.
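Restoring from such a file would presumably look like the reverse of
Bryan's pipeline (the target dataset name here is made up for
illustration):

```shell
# Decompress the stream to stdout and feed it to zfs receive,
# which recreates the sent datasets under the given target.
unpigz -c rpool.COMPLETE.<DATE>.gz | zfs receive -F tank/restore
```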
Of course it sounds a bit like a waste of work, CPU time and storage
to create an archive file every time. But sometimes there are
situations where you simply want a standalone file, and he certainly
has his reasons.
I have my x64 system's pools set to gzip-9, and on SPARC I usually set
them all to gzip (the default level, 6) directly after creation.
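That setup amounts to setting the property once at the pool's root
dataset and letting child datasets inherit it:

```shell
# gzip-9 trades CPU for the best gzip ratio; children inherit it.
zfs set compression=gzip-9 rpool

# Verify the property is inherited all the way down.
zfs get -r compression rpool
```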
But again (even though it makes little sense for me to compress a file
that already resides in one of my compressed pools), independently of
that fact: pigz is just interesting, no matter what.