[OpenIndiana-discuss] Useful tidbit for ZFS backup via ZFS Send
Bryan N Iotti
ironsides.medvet at gmail.com
Sun Sep 30 09:18:49 UTC 2012
Guys,
This is just so that, if I ever have to recover from a catastrophic failure,
I can simply zfs receive the .gz archive back into rpool.
I don't need granular file recovery for this dataset and I found this
solution to be hassle-free and effective for my needs.
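For reference, the restore is roughly the reverse pipeline, run from the
live DVD after recreating the pool; the exact receive flags (-F, -d, -u,
...) depend on the pool layout, so take this as a sketch:
- gunzip -c rpool.COMPLETE.<DATE>.gz | zfs receive -F rpool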
The important data on this machine, the stuff I really don't want to
lose, resides on 2x2TB SAS disks in a mirror configuration. I regularly
run some rsync scripts that synchronize the files with my backup server,
which unfortunately does not run Solaris.
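Such a script can be as simple as a single rsync call, along the lines of
- rsync -av /tank/data/ backupserver:/srv/backups/data/
with the paths and hostname above being just placeholders.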
I tried the file-based ZFS solution, but I found that it becomes a hassle
with the mount points. Also, when I rsync the directory where this
(rather large) file resides, rsync treats the whole file as new every
time and copies it over in full.
The reason I'm doing this without ZFS compression is that it gives me
.gz archives I can move around freely between my machines if need
be. The compression ratio is not a problem, and now, with pigz, it also
takes little time.
The main thing is that pigz output can also be decompressed with the
regular gunzip that is present on the OI live DVDs... That's also why I'm
settling for this compression scheme even if it is inferior.
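A quick way to verify that compatibility is to test an archive with the
stock tool, e.g.
- gunzip -t rpool.COMPLETE.<DATE>.gz
which should exit cleanly on a file produced by pigz.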
Thank you all for your comments on the matter, I found them very useful
and informative.
Bryan
On 09/30/12 01:34 AM, Martin Bochnig wrote:
> On Sun, Sep 30, 2012 at 12:15 AM, Richard Elling
> <richard.elling at richardelling.com> wrote:
>> On Sep 29, 2012, at 6:46 AM, Bryan N Iotti <ironsides.medvet at gmail.com> wrote:
> [SNIP]
>>> I searched online for a multithreaded version and came across pigz (http://www.zlib.net/pigz/). I downloaded it, modified the Makefile to use the Solaris Studio 12.3 cc with the proper CFLAGS ("-fast", for now, no -m64). I then copied both pigz and unpigz to a directory in my PATH and modified the last command to:
>>> - zfs send -R rpool@<DATE> | /usr/local/bin/pigz > rpool.COMPLETE.<DATE>.gz
>>>
>>> Now backups are compressed using multiple threads and the process takes about a quarter of the time.
>> Why not send them to a dataset created for backup, with compression=gzip set?
>> That way you get parallelized compression and no need to install anything :-)
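>> Something along these lines, with the pool/dataset names being just
>> placeholders:
>> - zfs create -o compression=gzip backuppool/dumps
>> - zfs send -R rpool@<DATE> > /backuppool/dumps/rpool.COMPLETE.<DATE>.zfs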
>> -- richard
>
>
> I also wondered about this, but it seems like Bryan simply wants to
> have them as distinct archive files that he can move around with wget
> or sftp.
>
> Whatever, Bryan's pointer to pigz itself is interesting, completely
> independently from ZFS.
> Back to ZFS: He could also set the .zfs dir to visible and then access
> all his various snapshots at will, separately.
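> (That is, something like "zfs set snapdir=visible <dataset>", after which
> the snapshots show up under <mountpoint>/.zfs/snapshot/.)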
> One advantage of his archive-file solution is that he can move his
> backup to pools with an older ZFS version if he wishes, for whatever
> reason. Even to UFS if he wants.
> Of course it sounds a bit like a waste of work, CPU time and storage
> if he creates a full archive file every time. But sometimes there are
> situations where you simply want a standalone file, and he certainly
> has his reasons.
>
>
> I have my x64 system's pools set to gzip-9 and on SPARC I usually set
> them all to gzip(-6) directly after creation.
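> (In command form that is roughly zfs set compression=gzip-9 <pool> on x64
> and zfs set compression=gzip <pool> on SPARC; only data written after the
> change gets compressed.)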
> But again (although it makes little sense to gzip a file that already
> resides in one of those compressed pools), independently of that fact:
> pigz is just interesting, no matter what.
>
>
>
> --
> regards,
> %martin bochnig
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss