[OpenIndiana-discuss] Useful tidbit for ZFS backup via ZFS Send

Bryan N Iotti ironsides.medvet at gmail.com
Sat Sep 29 14:44:18 UTC 2012


Open the Makefile with gedit.

The first line says CC=cc. Delete the whole line; I use 'export 
CC=/opt/solarisstudio12.3/bin/cc' from the shell before running make.

The second line says CFLAGS=-O3 -Wall -Wextra. This works fine for GCC, 
but Solaris Studio doesn't like -Wall. Just change it to CFLAGS=-fast.

Then on lines 18, 21 and 27 of the Makefile you find the same flags as before.
Remove -Wall -O3 -DDEBUG and write $(CFLAGS) in their place on each line. Leave -DNOTHREAD where it appears.
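Put together, the edited Makefile ends up looking roughly like this (a sketch only — the exact line contents and target names depend on the pigz version you downloaded):

```make
# CC=cc          <- delete this first line; CC comes from the shell environment
CFLAGS=-fast     # was: CFLAGS=-O3 -Wall -Wextra (GCC-style flags)

# Lines that previously repeated the flags now reference the variable,
# keeping -DNOTHREAD where it appeared (illustrative target):
pigzn.o: pigz.c
	$(CC) $(CFLAGS) -DNOTHREAD -c -o $@ pigz.c
```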

Save the file. Open a terminal in the folder. Use the command "export 
CC=<your location for the Solaris Studio compiler>". Then run make.

It will build pigz and unpigz. Copy them to a directory in your PATH. Done.

These steps are only necessary for the Solaris Studio C compiler; GCC 
only needs the deletion of the first line, along with the command 
"export CC=<your location for gcc>".
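The whole sequence, for either compiler, then boils down to the following (the paths shown are examples — substitute your own install locations):

```shell
# Solaris Studio: needs the Makefile edits described above
export CC=/opt/solarisstudio12.3/bin/cc   # example install path
make

# GCC: only the CC=cc line needs deleting from the Makefile
export CC=/usr/bin/gcc                    # example path
make

# install the results somewhere in your PATH
cp pigz unpigz /usr/local/bin/
```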

Bryan

On 09/29/12 04:17 PM, Roel_D wrote:
> How did you modify the make-file? Is there a general howto for this kind of modification?
>
> Kind regards,
>
> The out-side
>
> On 29 Sep 2012, at 15:46, Bryan N Iotti <ironsides.medvet at gmail.com> wrote:
>
>> Hi all,
>>
>> thought you'd like to know the following...
>>
>> I have my rpool on a 146GB SCSI 15K rpm disk.
>>
>> I regularly back it up with the following sequence of commands:
>> - zfs snapshot -r rpool@<DATE>
>> - cd to backup dir and su
>> - zfs send -R rpool@<DATE> | gzip > rpool.COMPLETE.<DATE>.gz
>>
>> ... as per Oracle manual.
>>
>> I was wondering why it was so slow, taking a couple of hours. Then I paid attention to my CPU meter and realized that the normal gzip was running as a single thread.
>>
>> I searched online for a multithreaded version and came across pigz (http://www.zlib.net/pigz/). I downloaded it, modified the Makefile to use the Solaris Studio 12.3 cc with the proper CFLAGS ("-fast", for now, no -m64). I then copied both pigz and unpigz to a directory in my PATH and modified the last command to:
>> - zfs send -R rpool@<DATE> | /usr/local/bin/pigz > rpool.COMPLETE.<DATE>.gz
>>
>> Now backups are compressed using multiple threads and the process takes about a quarter of the time.
>>
>> Pigz apparently splits the input into blocks of a user-definable size (128 KB being the default) and compresses them in parallel, using all available threads (or fewer, with the -p <threads> option).
>>
>> Hope this helps some of you. Would be nice to have in the pkg repo.
>>
>> Bryan
>>
>> _______________________________________________
>> OpenIndiana-discuss mailing list
>> OpenIndiana-discuss at openindiana.org
>> http://openindiana.org/mailman/listinfo/openindiana-discuss



