[OpenIndiana-discuss] send/recv multi part zfs filesystem

Peter Tribble peter.tribble at gmail.com
Sun Jan 16 21:46:10 UTC 2022


Recursive zfs send and receive is a little non-obvious. These notes I
wrote are a bit old, but still apply:

https://ptribble.blogspot.com/2012/09/recursive-zfs-send-and-receive.html
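
The short version: take a recursive snapshot so every dataset in the
tree carries a snapshot of the same name, then send the whole tree with
-R and receive with -d so the dataset paths get recreated under the
target. A minimal sketch using your dataset names (untested as typed):

  zfs snapshot -r p0/rhosts@mig1
  zfs send -R p0/rhosts@mig1 | zfs receive -u -d p1

Here -d tells receive to strip the source pool name and recreate
p1/rhosts/2x2/F-win and friends underneath p1, and -u stops the
received filesystems being mounted as they arrive.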

On Sat, Jan 15, 2022 at 2:41 PM hput via openindiana-discuss <
openindiana-discuss at openindiana.org> wrote:

> setup HOST ubuntu-21.10
> Vbox VM OI/hipster
> Installed from 2021.04.31
> updated 220114; uname -a
>    SunOS oi 5.11 illumos-c12f119108 i86pc i386 i86pc
>
> -------       -------       ---=---       -------       -------
>
> Having trouble getting send/recv working to expectations.  Maybe
> expectations are wrong.
>
> I studied up a bit in the Oracle docs on send/recv, but skimming
> through them again just now I don't see a multipart zfs fs being
> send/recv'd.  The examples I find are all like:
> zfs send p0/one@snp | zfs recv p1/one
>
> Maybe it's done with the -R option, but I'm not sure; if there are
> examples of a multipart zfs fs being send/recv'ed, I haven't found
> them.  I haven't studied beyond the basic send/recv.
>
> Using the -R option looked much more complicated.
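>
> Guessing at what the -R form might look like (untested on my part, so
> possibly wrong):
>
>   zfs send -R p0/rhosts@snap | zfs recv -F p1/rhosts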
>
> So I rsynced a bunch of data from a defunct Windows machine, using
> SystemRescue on a usb stick to get access to the data.
>
> I rsync'ed some 283 GB of data to an OI running in a Vbox vm, to
> p0/rhosts/2x2/F-win; most of the F: drive on the windows machine:
>
> rsync -vv -rlptgoD --stats /sdd1/ OI:/rhosts/2x2/F-win
>
> That eventually produced 283 GB of data at OI:/rhosts/2x2/F-win, all
> in the last segment, F-win.
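>
> For what it's worth, -rlptgoD is exactly what rsync's -a shorthand
> expands to, so the same copy could have been written:
>
>   rsync -avv --stats /sdd1/ OI:/rhosts/2x2/F-win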
>
> -------       -------       ---=---       -------       -------
>
> This may be a good place to explain that both file systems I worked on
> are sitting on separate raidz2 pools.
>
> pool p0 has 9 x 180 GB disks
> pool p1 has 6 x 190 GB disks
>
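> Both were created as single raidz2 vdevs, something like this (with
> made-up device names, just to illustrate):
>
>   zpool create p0 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0
>   zpool create p1 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
>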
> -------       -------       ---=---       -------       -------
>
> I then decided to try my hand at send/recv, but on the one machine,
> from zfs fs to zfs fs.
>
> I created a similarly named p1/rhosts/2x2/F-win
>
> I unmounted both fs and ran:
>
> zfs send p0/rhosts/2x2/F-win@snap | zfs recv -F p1/rhosts/2x2/F-win
>
> And that started working, but I had so much trouble, floundering like
> a large trout for hours, that I gave up on it while waiting to learn
> more.
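>
> For the record the snapshot had to exist before the send, so the full
> sequence was roughly (reconstructing it here, details may be off):
>
>   zfs snapshot p0/rhosts/2x2/F-win@snap
>   zfs send p0/rhosts/2x2/F-win@snap | zfs recv -F p1/rhosts/2x2/F-win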
>
> I did get the data moved from the win-mach to OI at least.
>
> -------       -------       ---=---       -------       -------
>
> OK, so I have a question or two but will start with this:
>
> Is it even possible to send/recv a multi-segmented zfs fs to another
> multi-segmented zfs fs?
>
> Or is it necessary to break it down so you are sending only 1 segment
> at a time?
>
> so like: p0/rhosts@snap to  p1/rhosts
>
>          p0/rhosts/2x2@snap to p1/rhosts/2x2
>
> And so on till all segments are sent.
>
> I'd be happy enough to learn that the way just above (send 1 segment
> at a time) is best, but would really like to
>
> send/recv pool0/seg1/seg2/seg3 all at once to pool1/seg1/seg2/seg3.
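>
> Something like this loop is what I imagine for the one-at-a-time route
> (an untested sketch):
>
>   # assumes each of these datasets has an @snap snapshot
>   for seg in rhosts rhosts/2x2 rhosts/2x2/F-win; do
>       zfs send p0/$seg@snap | zfs recv -F p1/$seg
>   done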
>
> One thing that seems dubious is that I'd be faced with deciding which
> snapshot to use, and I think that might rule the whole thing out.
>
> In fact I think I just talked myself out of the whole idea of sending all
> 'segs' at once.
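>
> (Though I gather zfs snapshot -r p0/rhosts@snap would stamp every
> dataset in the tree with the same snapshot name, which may be exactly
> what -R needs.  Just a guess on my part.)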
>
> _______________________________________________
> openindiana-discuss mailing list
> openindiana-discuss at openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>


-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/

