[OpenIndiana-discuss] cloning an OI system to VMDK

dkjar at elmira.edu
Thu Jul 27 22:52:04 UTC 2017


Yeah, I would prefer that too, but all I need is an X2200 M2, and what is offered as emulation is way more powerful, so that is not a problem.  My X2200 ran flawlessly for 10 years until they moved it, and there you go.  I also look forward to having the VM snapshot on my workstation for experimenting, and to just moving the VM onto the main server as needed.
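(For anyone attempting the same move, the "dd way" asked about below can be sketched roughly as follows. This is a hedged outline, not a tested recipe: the whole-disk slice /dev/rdsk/c4t0d0s2 and the /tank staging paths are illustrative assumptions, and it assumes VirtualBox is the target hypervisor.)

```shell
# Image one half of the rpool mirror to a raw file. s2 is the traditional
# Solaris whole-disk slice; adjust the device name for your system.
dd if=/dev/rdsk/c4t0d0s2 of=/tank/bio2.raw bs=1M conv=noerror,sync

# VBoxManage ships with VirtualBox; convertfromraw wraps a raw disk image
# in a VMDK container that VirtualBox can attach directly.
VBoxManage convertfromraw /tank/bio2.raw /tank/bio2.vmdk --format VMDK
```

The raw image needs as much free space as the physical disk, and the VM's emulated controller may present the disk under a different target, so the first boot may drop to maintenance mode until the device paths are reconciled.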

Dr. Daniel Kjar
Associate Professor of Biology
Elmira College

Sent from my Windows 10 phone

From: Paolo Aglialoro
Sent: Thursday, July 27, 2017 6:45 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] cloning an OI system to VMDK

What is the point of migrating a system that thrives on ZFS on real
hardware to a virtual environment, where ZFS becomes a useless
abstraction with heavy overhead?

Your box is dying?
Buy new hardware!

On 27 Jul 2017 5:22 PM, "Daniel Kjar" <dkjar at elmira.edu> wrote:

> My old hardware is dying and I want to move my system to a virtualized
> environment.  I have been digging for ways to do this and I am starting
> to get frustrated.  I wanted to use flarcreate, but apparently that never
> happened for Solaris 11.  I then looked at P2V conversion tools, but that
> led nowhere except to some proprietary tools I don't have a license for.
> Currently I am installing a distro-const ISO into VirtualBox, but I
> suspect that is not going to produce what I need (an exact clone of my
> old machine).
>
> It is only a single rpool (file storage is NFS-mounted from another box).
> Any suggestions?  zfs send the rpool to the new virtual machine I made
> using distro-const?  Will that even work, it being the rpool and all?  Is
> there some super-easy dd way that I am missing?  This system has been
> running for over a decade and is crusty as hell; I doubt there is any way
> I could rebuild it from scratch, so I would rather not.  I don't have the
> time to get Perl and ImageMagick working together again.
>
> [root@bio2:~]> zpool status
>   pool: rpool
>  state: ONLINE
>   scan: scrub repaired 0 in 1h58m with 0 errors on Mon Jan  2 16:31:55 2017
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         rpool         ONLINE       0     0     0
>           mirror-0    ONLINE       0     0     0
>             c4t0d0s0  ONLINE       0     0     0
>             c4t1d0s0  ONLINE       0     0     0
>
> errors: No known data errors
> [root@bio2:~]> zfs list
> NAME                          USED  AVAIL  REFER  MOUNTPOINT
> rpool                         205G  23.6G  46.5K  /rpool
> rpool/ROOT                    192G  23.6G    31K  legacy
> rpool/ROOT/openindiana       14.7M  23.6G  5.64G  /
> rpool/ROOT/openindiana-1     50.8M  23.6G  6.34G  /
> rpool/ROOT/openindiana-a8-1  42.0M  23.6G  98.8G  /
> rpool/ROOT/openindiana-a8-2  14.2M  23.6G   104G  /
> rpool/ROOT/openindiana-a8-3   192G  23.6G   119G  /
> rpool/dump                   6.00G  23.6G  6.00G  -
> rpool/swap                   6.38G  29.8G   135M  -
> [root@bio2:~]>
>
>
>
> _______________________________________________
> openindiana-discuss mailing list
> openindiana-discuss at openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
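(The zfs send idea raised above can be sketched roughly like this. A hedged outline only: it assumes the new VM is booted from an OpenIndiana live/install ISO, that its virtual disk appears as c1t0d0, and that the two machines can reach each other over SSH; the hostname "newvm" and all device paths are illustrative, not taken from the thread.)

```shell
# On the old box: take a recursive snapshot of the whole root pool.
zfs snapshot -r rpool@migrate

# On the VM, from the live ISO, a fresh pool is first created on the
# virtual disk, e.g.:  zpool create -f rpool c1t0d0s0
# Then stream every dataset and its properties across:
zfs send -R rpool@migrate | ssh newvm zfs receive -Fdu rpool

# On the VM afterwards: point the pool at the boot environment to use and
# reinstall the boot blocks (legacy GRUB here; device path illustrative).
zpool set bootfs=rpool/ROOT/openindiana-a8-3 rpool
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
```

The old dump and swap zvols may be worth excluding to save space, and the first boot will likely need /etc/vfstab, /etc/path_to_inst, and network device names reconciled with the emulated hardware.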


