[OpenIndiana-discuss] How to replace failed rpool mirror disk?

Bob Friesenhahn bfriesen at simple.dallas.tx.us
Sun Jan 17 20:11:18 UTC 2021


On Sun, 17 Jan 2021, Bob Friesenhahn wrote:
>
> It is my understanding that the partitioning/format of the existing disk came 
> from a from-scratch OpenIndiana Hipster install done perhaps a year ago.  If 
> this EFI partitioning is what is done now, the Wiki does not seem to reflect 
> it.
>
> The new disk is being resilvered into the root pool now.

It seems that there are new issues (or at least unexpected things) 
that I don't see addressed anywhere in the OpenIndiana documentation 
or Wiki:

I did this after adding my replacement disk to the root pool:

weerd:~# /usr/sbin/installboot -m /boot/pmbr /boot/gptzfsboot /dev/rdsk/c5t1d0s0
Booting pcfs from EFI labeled disks requires the boot partition.

So now, after adding this disk to the root pool, the partitioning info 
has entirely changed, and there is no boot partition!
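
(For completeness: "adding" here was the usual ZFS replacement step. 
A minimal sketch, assuming the failed device was swapped in the same 
slot; the exact commands used may have differed:

   # Replace the failed mirror member; resilvering starts automatically.
   zpool replace rpool c5t1d0
   # Confirm the resilver has finished before touching the boot loader.
   zpool status rpool
)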

Existing disk:
=============

Total disk sectors available: 1953508717 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector          Size          Last Sector
   0     system    wm               256       256.00MB           524543
   1        usr    wm            524544       931.26GB           1953508750
   2 unassigned    wm                 0            0                0
   3 unassigned    wm                 0            0                0
   4 unassigned    wm                 0            0                0
   5 unassigned    wm                 0            0                0
   6 unassigned    wm                 0            0                0
   8   reserved    wm        1953508751         8.00MB           1953525134

weerd:~# prtvtoc /dev/rdsk/c5t0d0s2
* /dev/rdsk/c5t0d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
* 1953525168 sectors
* 1953525101 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
        0     12    00        256    524288    524543
        1      4    00     524544 1952984207 1953508750
        8     11    00  1953508751     16384 1953525134


Replacement for failed second disk:
===================================

partition> print
Current partition table (original):
Total disk sectors available: 1953508717 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector          Size          Last Sector
   0     system    wm               256       256.00MB           524543
   1        usr    wm            524544       931.26GB           1953508750
   2 unassigned    wm                 0            0                0
   3 unassigned    wm                 0            0                0
   4 unassigned    wm                 0            0                0
   5 unassigned    wm                 0            0                0
   6 unassigned    wm                 0            0                0
   8   reserved    wm        1953508751         8.00MB           1953525134

weerd:~# prtvtoc /dev/rdsk/c5t1d0s2
* /dev/rdsk/c5t1d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
* 1953525168 sectors
* 1953525101 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
        0     12    00        256    524288    524543
        1      4    00     524544 1952984207 1953508750
        8     11    00  1953508751     16384 1953525134

====

Now if I run 'fdisk' on either disk, it reports EFI partitioning!

It seems that adding the disk to the ZFS pool has converted its 
partitioning from SMI to EFI.
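
(This relabeling is expected when the new device is handed to zpool by 
its whole-disk name: ZFS then writes a fresh EFI/GPT label itself, 
which would explain the 256MB 'system' slice and the large 'usr' slice 
appearing without any manual formatting. A sketch of the two variants, 
assuming current illumos zpool behavior; <existing-vdev> is a 
placeholder:

   # Whole-disk attach: ZFS relabels the new device with an EFI/GPT label.
   zpool attach rpool <existing-vdev> c5t1d0

   # Slice attach: whatever label is already on the disk is used as-is.
   zpool attach rpool <existing-vdev> c5t1d0s0
)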

So now (trusting that we should use modern documented methods rather 
than the antique methods described by the Wiki) I used

   /sbin/bootadm install-bootloader

and no errors were reported.  Adding the '-v' option likewise reveals 
no significant issues, and I can see it referring to both disks.  This 
is the relevant high-level description of the bootadm 
install-bootloader subcommand:

        This subcommand can be used to install, update, and repair the boot
        loader on a ZFS pool intended for booting. When disks in the ZFS pool
        used for booting the system have been replaced, one should run bootadm
        install-bootloader to ensure that all disks in that pool have the
        system boot loader installed.

There are more details further on in the man page, but perhaps I must 
simply assume that all is now OK with the boot loader.  It is even 
possible that zfs arranged to install the boot loader on the new disk 
automatically.
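
If in doubt, the loader can also be reinstalled explicitly and 
verbosely; a minimal sketch, assuming the illumos bootadm options 
(-P selects the pool, -f forces reinstallation even if the on-disk 
loader looks current):

   # Force a verbose reinstall of the boot loader on every disk in rpool.
   /sbin/bootadm install-bootloader -f -v -P rpool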

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Public Key,     http://www.simplesystems.org/users/bfriesen/public-key.txt


