[OpenIndiana-discuss] cloning an OI system to VMDK
Jim Klimov
jimklimov at cos.ru
Thu Jul 27 20:49:04 UTC 2017
On July 27, 2017 7:09:58 PM GMT+02:00, Daniel Kjar <dkjar at elmira.edu> wrote:
>If I zfs send into the distro-const install that I just did on
>VirtualBox, can the source server keep running? And do I have to
>livecd-boot the VM I am moving the pool to, or can it be up while I
>send the pool? I think I like the zfs send route better, from an
>armchair perspective.
>
>
>On Thu, Jul 27, 2017 at 12:57 PM, Jim Klimov <jimklimov at cos.ru> wrote:
>> On July 27, 2017 5:05:19 PM GMT+02:00, Daniel Kjar <dkjar at elmira.edu> wrote:
>>>My old hardware is dying and I want to move my system to a
>>>virtualized environment. I have been digging for ways to do this, but
>>>I am not sure; I am starting to get frustrated. I wanted to do
>>>flarcreate, but apparently that never happened for Solaris 11. I then
>>>looked at P2V conversion stuff, but that led nowhere except to some
>>>proprietary tools I don't have a license for. Currently I am
>>>installing a distro-const ISO into VirtualBox, but I suspect that is
>>>not going to produce what I need (a pure clone of my old machine).
>>>
>>>It is only a single rpool (file storage is NFSed in from another box).
>>>
>>>Any suggestions? zfs send the rpool to the new virtual machine I made
>>>using distro-const? Will that even work, it being the rpool and all?
>>>Is there some super easy dd way that I am missing? This system has
>>>been running for over a decade and is crusty as hell; I doubt there is
>>>any way I could rebuild it from scratch, so I would rather not. I
>>>don't have the time to try and get Perl and ImageMagick working
>>>together again.
>>>
>>>[root@bio2:~]> zpool status
>>>  pool: rpool
>>> state: ONLINE
>>>  scan: scrub repaired 0 in 1h58m with 0 errors on Mon Jan  2 16:31:55 2017
>>>config:
>>>
>>>        NAME          STATE     READ WRITE CKSUM
>>>        rpool         ONLINE       0     0     0
>>>          mirror-0    ONLINE       0     0     0
>>>            c4t0d0s0  ONLINE       0     0     0
>>>            c4t1d0s0  ONLINE       0     0     0
>>>
>>>errors: No known data errors
>>>[root@bio2:~]> zfs list
>>>NAME                          USED  AVAIL  REFER  MOUNTPOINT
>>>rpool                         205G  23.6G  46.5K  /rpool
>>>rpool/ROOT                    192G  23.6G    31K  legacy
>>>rpool/ROOT/openindiana       14.7M  23.6G  5.64G  /
>>>rpool/ROOT/openindiana-1     50.8M  23.6G  6.34G  /
>>>rpool/ROOT/openindiana-a8-1  42.0M  23.6G  98.8G  /
>>>rpool/ROOT/openindiana-a8-2  14.2M  23.6G   104G  /
>>>rpool/ROOT/openindiana-a8-3   192G  23.6G   119G  /
>>>rpool/dump                   6.00G  23.6G  6.00G  -
>>>rpool/swap                   6.38G  29.8G   135M  -
>>>[root@bio2:~]>
>>>
>>>
>>>
>>
>> You might have some luck starting the VM from an ISO image to get
>> networking there and make an rpool, and then zfs-send the original
>> machine's datasets recursively to the new one. After that you can
>> enable the rootfs dataset, get the loader or GRUB to boot it up, and
>> probably fiddle with various /etc/ files to address the hardware
>> changes.
>>
>> Otherwise, it is quite doable - I've done a fair number of dual-booted
>> systems with OI in a partition, so it can run both as a native OS and
>> as a VirtualBox guest from another OS (one at a time, of course, and
>> you have to import/export the rpool with a Firefly recovery image or a
>> live USB to address the storage device path changes), as well as
>> systems set up initially in VBox and then expanded onto physical
>> hardware, and the opposite too.
>>
>> So while you can have some adventure on the technical side, the
>> general approach certainly works, both ways.
>>
>> Jim
>> --
>> Typos courtesy of K-9 Mail on my Android
Regarding livecd vs. a private distro-const image - I think there is little practical difference, as long as you can send the bits over the network and prepare the boot environment. The livecd is readily available and comes with the tools needed to set up or repair a system, so I prefer that. Also, it is on media separate from the pool you're receiving into, so there is less room for conflicts.
For zfs send - yes, you can do `zfs snapshot -r rpool@mysnap-1` on the running system and send everything up to that point, and then send an incremental stream of the differences between mysnap-1 and a subsequent mysnap-2 (for each dataset) to slurp up the changes that happened on the original system while you were copying. Note that generally you should not change anything in the datasets you are receiving into, if you intend to add data from newer snapshots of the original system.
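For illustration, the whole dance could look roughly like this (the hostname and the ssh transport are just placeholders - use whatever way you have of reaching the VM booted from the live media):

  # On the old box: recursive snapshot, then a full replication stream
  # into the pool you created on the VM
  zfs snapshot -r rpool@mysnap-1
  zfs send -R rpool@mysnap-1 | ssh root@newvm zfs receive -Fdu rpool

  # Later: a second snapshot and an incremental replication stream
  # covering everything that changed in between
  zfs snapshot -r rpool@mysnap-2
  zfs send -R -I rpool@mysnap-1 rpool@mysnap-2 | \
      ssh root@newvm zfs receive -du rpool

Here -R sends the whole dataset tree with snapshots and properties, -d maps the sent names under the receiving pool, -u keeps the received filesystems unmounted so they don't clash with the live environment, and -F lets the first receive roll over the freshly created (empty) top-level dataset.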
At that point, zfs clones of snapshots on the receiver, together with rsync, become viable tools for merging differences (e.g. as you pound the copy in the VM into accepting the new hardware, and then want to receive the final changed user-data from the original box).
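A sketch of that merge step, with made-up clone and mountpoint names (the BE dataset name is from your zfs list):

  # On the VM: a writable clone of a received snapshot, mounted out of
  # the way; the received dataset itself stays untouched so it can still
  # accept further incremental receives
  zfs clone -o mountpoint=/mnt/oi-merge \
      rpool/ROOT/openindiana-a8-3@mysnap-2 rpool/ROOT/oi-merge

  # Pull the final user-data changes straight from the still-running box
  rsync -avPHK root@bio2:/export/home/ /mnt/oi-merge/export/home/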
Also, `rsync -avPHK` can be useful for rearranging datasets - e.g. moving /export/home or database contents to separate datasets, if you currently have it all in one. This path may be detailed better in my articles on the Wiki, e.g. on split-root setups.
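For example, splitting home directories out of the single root filesystem could go something like this (dataset and path names here are purely illustrative):

  # New dedicated dataset, parked at a temporary mountpoint
  zfs create -o mountpoint=/export/home.new rpool/home

  # Copy the data over, preserving hard links and the rest
  rsync -avPHK /export/home/ /export/home.new/

  # When happy with the copy: move the old directory aside and let the
  # new dataset take over the real mountpoint
  mv /export/home /export/home.old
  zfs set mountpoint=/export/home rpool/home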
You can do this after the initial migration though, assuming there is enough space in the VM, so you collect and solve problems one at a time ;)
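And for the "enable the rootfs and let the loader boot it" part from my earlier mail, the closing steps from the live environment on the VM are roughly these (the BE name is taken from your zfs list; double-check the details against your setup):

  # Point the pool at the root filesystem you want to boot
  zpool set bootfs=rpool/ROOT/openindiana-a8-3 rpool

  # Install the boot loader onto the VM's disk(s); older GRUB-based
  # installs would use installgrub instead
  bootadm install-bootloader -P rpool

  # Then mount that BE somewhere and review the hardware-dependent bits
  # before the first boot: /etc/vfstab (e.g. the swap zvol entry),
  # network interface configuration, /etc/path_to_inst, and so on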
Jim
--
Typos courtesy of K-9 Mail on my Android