[OpenIndiana-discuss] problem adding a mirror disk to rpool

stes@PANDORA.BE stes at telenet.be
Tue Jan 3 16:54:38 UTC 2023


Do you have a particular reason to use slice s0 instead of using the whole disk?

https://illumos.org/man/8/zpool

"disks       ZFS can use
             individual slices or partitions, though the recommended mode of
             operation is to use whole disks.  A disk can be specified by a full
             path, or it can be a shorthand name (the relative portion of the
             path under /dev/dsk).  A whole disk can be specified by omitting
             the slice or partition designation.  For example, c0t0d0 is
             equivalent to /dev/dsk/c0t0d0s2."

So I suppose you can have an rpool on slice 0 ("c7d0s0", note the "s0") as in your example,
but the zpool manpage gives the whole-disk name c0t0d0 as its typical, "recommended" example.

I used whole-disk mode, and I did not use fmthard or prtvtoc to attach a disk to a mirror:
zpool did the formatting (partitioning) itself.
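
Roughly what I did, as a sketch only (the cXdY device names here are illustrative,
not yours; note there is no "s0" suffix when you hand zpool the whole disk):

    zpool attach rpool c3d0 c4d0    # whole-disk names: zpool labels the disk itself
    zpool status rpool              # then wait for the resilver to finish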

I used two SATA disks from different vendors (Seagate and Western Digital) but of the same size.

The bootadm manpage says that you have to run install-bootloader when disks in the boot pool have been replaced.

https://illumos.org/man/8/bootadm

"      When disks in the ZFS pool
       used for booting the system have been replaced, one should run bootadm
       install-bootloader to ensure that all disks in that pool have the system
       boot loader installed."
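
In its simplest form that would be (assuming the boot pool is named rpool, as
on your system):

    bootadm install-bootloader -P rpool    # reinstalls the loader on every disk in the pool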

HOWEVER, in your case I'm not so sure which slice is going to be used.

You could try running it with the -M option, but that may not be such a good idea:


"       -M

           On x86 systems, in an install-bootloader operation, additionally
           installs the system boot loader to the MBR (master boot record). For
           more information, see the discussion of install-bootloader in the
           SUBCOMMANDS section.

           This option is not supported on non-x86 systems, and it is an error
           to specify it."


I ran this option on the whole disks in an rpool mirror, BUT as indicated that is a mirror built on whole disks.
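
What I ran there was, roughly (again a sketch, and again for a whole-disk mirror):

    bootadm install-bootloader -M -P rpool    # -M additionally writes the MBR (x86 only)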

Your rpool mirror is different, as it seems to be built on a slice.
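
If you do try it on your slice-based mirror, running it verbosely at least
shows which devices bootadm writes to (I have not tested this on a layout
like yours, so treat it as a suggestion only):

    bootadm install-bootloader -v -P rpool    # -v prints what gets installed where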


----- On 3 Jan 2023 at 11:32, Marc Lobelle marc.lobelle at uclouvain.be wrote:

> Indeed, finally, I created on the second disk of the mirror, with fdisk,
> the same partition (Solaris 2, 100%, bootable) as on the first disk,
> then I copied the VTOC using prtvtoc and fmthard, then used zpool
> replace and zpool clear, and everything is now OK according to zpool
> status. It was the fdisk phase that was missing in my original procedure.
> 
> Did the resilvering also copy the booting stuff, or should I run
> installgrub?
> 
> Thank you all for your help
> 
> Marc
> 
> On 1/3/23 03:40, Reginald Beardsley via openindiana-discuss wrote:
>>   Partition the disk so you have a partition that matches the size of your other
>>   disks in the pool. Then mount that partition.
>>
>>
>>       On Monday, January 2, 2023, 08:31:38 PM CST, Marc Lobelle
>>       <marc.lobelle at uclouvain.be> wrote:
>>   
>>   Hello,
>>
>> First, best wishes for 2023 to everybody!
>>
>> I tried to add a second 480 GB disk to my rpool (different manufacturer
>> and slightly larger).
>>
>> Below is what I did, but apparently there is a problem and the case
>> (adding a mirror disk) is not discussed in
>> https://illumos.org/msg/ZFS-8000-4J/
>>
>> Does anybody have an idea how to solve this issue? I have not yet run
>> installgrub on the new disk because I fear it would make things worse.
>>
>> Thanks
>>
>> Marc
>>
>> ml@spitfire:/home/ml# zpool status rpool
>>     pool: rpool
>>    state: ONLINE
>>     scan: scrub repaired 0 in 0 days 00:05:41 with 0 errors on Wed Dec 28
>> 16:43:12 2022
>> config:
>>
>>           NAME        STATE     READ WRITE CKSUM
>>           rpool       ONLINE       0     0     0
>>             c7d0s0    ONLINE       0     0     0
>>
>> errors: No known data errors
>> ml@spitfire:/home/ml# format
>> Searching for disks...done
>>
>>
>> AVAILABLE DISK SELECTIONS:
>>          0. c5d0 <Samsung SSD 870 QVO
>> 4TB=S5STNF0T406608A-S5STNF0T406608A-0001-3.64TB>
>>             /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
>>          1. c6d0 <Samsung SSD 870 QVO
>> 4TB=S5STNF0T406604J-S5STNF0T406604J-0001-3.64TB>
>>             /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
>>          2. c7d0 <Unknown-Unknown-0001 cyl 58366 alt 2 hd 255 sec 63>
>>             /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
>>          3. c8d0 <Unknown-Unknown-0001 cyl 58367 alt 2 hd 255 sec 63>
>>             /pci@0,0/pci-ide@1f,5/ide@1/cmdk@0,0
>> Specify disk (enter its number): ^C
>> ml@spitfire:/home/ml# prtvtoc /dev/rdsk/c7d0s0
>> * /dev/rdsk/c7d0s0 partition map
>> *
>> * Dimensions:
>> *         512 bytes/sector
>> *          63 sectors/track
>> *         255 tracks/cylinder
>> *       16065 sectors/cylinder
>> *       58368 cylinders
>> *       58366 accessible cylinders
>> *   937681920 sectors
>> *   937649790 accessible sectors
>> *
>> * Flags:
>> *   1: unmountable
>> *  10: read-only
>> *
>> * Unallocated space:
>> *         First       Sector      Last
>> *         Sector       Count      Sector
>> *             0       16065       16064
>> *
>> *                            First       Sector      Last
>> * Partition  Tag  Flags      Sector       Count      Sector  Mount Directory
>>          0      2    00        16065   937633725   937649789
>>          2      5    01            0   937649790   937649789
>>          8      1    01            0       16065       16064
>> ml@spitfire:/home/ml# prtvtoc /dev/rdsk/c7d0s0|fmthard -s - /dev/rdsk/c8d0s0
>> fmthard: Partition 2 specifies the full disk and is not equal
>> full size of disk.  The full disk capacity is 937665855 sectors.
>> fmthard:  New volume table of contents now in place.
>> ml@spitfire:/home/ml# zpool attach -f rpool c7d0s0 c8d0s0
>> Make sure to wait until resilver is done before rebooting.
>> ml@spitfire:/home/ml# zpool status rpool
>>     pool: rpool
>>    state: ONLINE
>> status: One or more devices is currently being resilvered.  The pool will
>>           continue to function, possibly in a degraded state.
>> action: Wait for the resilver to complete.
>>     scan: resilver in progress since Mon Jan  2 11:59:05 2023
>>           56,4G scanned at 2,56G/s, 909M issued at 41,3M/s, 56,4G total
>>           910M resilvered, 1,57% done, 0 days 00:22:55 to go
>> config:
>>
>>           NAME        STATE     READ WRITE CKSUM
>>           rpool       ONLINE       0     0     0
>>             mirror-0  ONLINE       0     0     0
>>               c7d0s0  ONLINE       0     0     0
>>               c8d0s0  ONLINE       0     0     0  (resilvering)
>>
>> errors: No known data errors
>> ml@spitfire:/home/ml# zpool status rpool
>>     pool: rpool
>>    state: DEGRADED
>> status: One or more devices could not be used because the label is
>> missing or
>>           invalid.  Sufficient replicas exist for the pool to continue
>>           functioning in a degraded state.
>> action: Replace the device using 'zpool replace'.
>>      see: http://illumos.org/msg/ZFS-8000-4J
>>     scan: resilvered 34,6G in 0 days 00:06:23 with 0 errors on Mon Jan  2
>> 12:05:28 2023
>> config:
>>
>>           NAME        STATE     READ WRITE CKSUM
>>           rpool       DEGRADED     0     0     0
>>             mirror-0  DEGRADED     0     0     0
>>               c7d0s0  ONLINE       0     0     0
>>               c8d0s0  UNAVAIL      0 57,4K     0  corrupted data
>>
>> errors: No known data errors
>> ml@spitfire:/home/ml#
>>
>>
> 
> _______________________________________________
> openindiana-discuss mailing list
> openindiana-discuss at openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss


