[OpenIndiana-discuss] OI-hipster-gui-20210430.iso progress report on 3 virtualization systems

Toomas Soome tsoome at me.com
Tue May 4 22:05:18 UTC 2021



> On 5. May 2021, at 00:41, Nelson H. F. Beebe <beebe at math.utah.edu> wrote:
> 
> I've now made installation attempts with the
> OI-hipster-gui-20210430.iso image on these three virtualization
> systems:
> 
> 	* CentOS 7.9    virt-manager/QEMU
> 	* Ubuntu 20.04  virt-manager/QEMU
> 	* Ubuntu 20.04  VirtualBox
> 
> Each VM is given 4GB DRAM, 4 CPUs, and 80GB disk; the latter is not
> partitioned, so ZFS uses the entire disk.  The disk image file is
> newly created, and so should have nothing but zero blocks.
> 
> On all three, the GUI installer worked as expected, and I selected a
> time zone (America/Denver), created one ordinary user account, and
> supplied a root password.  Installation completed normally, and the
> systems rebooted.
> 
> On the CentOS-based VM, the one on which I reported boot problems
> before, after the system rebooted, I logged in, used the network GUI
> tool to change to static IPv4 addressing, made one ZFS snapshot, ran
> "sync" (twice) then "poweroff", then took a virt-manager snapshot.  On
> the next reboot, I again got a similar problem to what I reported
> previously.
> 
> 	ZFS: i/o error - all block copies unavailable
> 	ZFS: failed to read pool rpool directory object


Instead of running poweroff, try running reboot, pausing at the autoboot prompt, and then switching the VM off — is it still broken then?

Another option is to create a mirror on rpool; if the system comes up, run zpool scrub and see whether it finds anything.
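The mirror-and-scrub suggestion could look roughly like the sketch below. The device names are hypothetical placeholders (check "zpool status rpool" for the actual vdev name, and format(8) for the new disk), and on a root pool the new disk may additionally need the boot loader installed before it is bootable:

```shell
# Attach a second disk to the existing rpool device, turning the
# top-level vdev into a two-way mirror. c2t0d0/c2t1d0 are example
# names only; substitute the real devices on your system.
zpool attach rpool c2t0d0 c2t1d0

# After the resilver completes, re-read and verify every block
# checksum in the pool; errors will show up in the status output.
zpool scrub rpool
zpool status -v rpool
```

If the scrub reports checksum errors on only one side of the mirror, that points at the device or the I/O path rather than at ZFS itself.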

rgds,
toomas


> 
> On the Ubuntu virt-manager VM, reboots are problem free, and the VM is
> fully configured with a large number of installed packages, and is
> working nicely as part of our test farm.
> 
> The Ubuntu VirtualBox VM built normally and rebooted normally, so I
> took a ZFS snapshot, rebooted, and started to install packages: it
> seems normal so far.
> 
> I'm not going to spend time trying to resurrect the VM on CentOS 7,
> but I'm still willing to try building on that system additional VMs
> from newer ISO releases for OpenIndiana Hipster.
> 
> One might be inclined to consider the CentOS-based VM as an example of
> failure, or bugs, inside the host O/S, or inside QEMU, or perhaps even
> the physical workstation (a 2015-vintage HP Z440 with 128GB DRAM, and
> several TB of disk storage, both EXT4 and ZFS).  However, that machine
> runs 80 to 100 simultaneous VMs with other O/Ses, and has been rock
> solid in its almost six years of service.  That would tend to
> exonerate the hardware, and virt-manager/QEMU, suggesting that
> something inside OpenIndiana is causing the problem.  However, the
> success of two other VMs from the same ISO image indicates that
> OpenIndiana is solid.
> 
> My workstation is essentially one-of-a-kind, so there is no way for me
> to see whether an apparently identical box from the same vendor would
> also experience failure of an OpenIndiana VM.
> 
> 
> -------------------------------------------------------------------------------
> - Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
> - University of Utah                    FAX: +1 801 581 4148                  -
> - Department of Mathematics, 110 LCB    Internet e-mail: beebe at math.utah.edu  -
> - 155 S 1400 E RM 233                       beebe at acm.org  beebe at computer.org -
> - Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
> -------------------------------------------------------------------------------
> 
> _______________________________________________
> openindiana-discuss mailing list
> openindiana-discuss at openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
