[OpenIndiana-discuss] RAIDZ issue

Toomas Soome tsoome at me.com
Wed Feb 17 14:36:27 UTC 2021


Disconnect the data disks, boot from USB/CD, and verify with lsdev -v that the boot pool disks are there.

Sent from my iPhone

> On 17. Feb 2021, at 16:15, Thebest videos <sri.chityala504 at gmail.com> wrote:
> 
> 
> I've created 3 disks as bootable disks, and the remaining disks as raw disks. Then I ran:
> zpool create -o altroot=/mnt datapool raidz1 disk1 disk2 disk3 disk4 raidz1 disk5 disk6 disk7 disk8
> I'm still getting the same ZFS I/O error.
> 
> 
>> On Wed, Feb 17, 2021, 15:56 Toomas Soome via openindiana-discuss <openindiana-discuss at openindiana.org> wrote:
>> hi!
>> 
>> From your screenshot, you do get 5 disks recognized (disk0 - disk4); that means those are the disks you can use for booting. The limit *may* be higher with UEFI boot.
>> 
>> You can check the number of disks while booting from USB/CD: press Esc to get out of the boot menu and enter lsdev -v. The loader will report only those disk devices the BIOS can see, and only those devices can be used for booting.
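>> 
>> A minimal sketch of that session (OK is the loader's interactive prompt you land on after pressing Esc; the actual device list will differ per machine):
>> 
>> OK lsdev -v
>> 
>> Any disk that does not appear in that listing cannot be a boot pool member.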
>> 
>> Since you are writing to the OI list (and not FreeBSD), there is another way to check: in illumos you can get the list of BIOS-accessible disks with:
>> 
>> $ prtconf -v | grep biosdev
>>         name='biosdev-0x83' type=byte items=588
>>         name='biosdev-0x82' type=byte items=588
>>         name='biosdev-0x81' type=byte items=588
>>         name='biosdev-0x80' type=byte items=588
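>> 
>> A quick way to get just the count (a sketch; it assumes every line matching "biosdev" corresponds to one BIOS-visible disk, as in the output above):
>> 
>> $ prtconf -v | grep -c biosdev
>> 4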
>> 
>> In this example, the system has 4 BIOS-visible disk devices; incidentally, those are all the disks this system has, and I have:
>> 
>> tsoome at beastie:/code$ zpool status
>>   pool: rpool
>>  state: ONLINE
>>   scan: resilvered 1,68T in 0 days 10:10:07 with 0 errors on Fri Oct 25 05:05:34 2019
>> config:
>> 
>>         NAME        STATE     READ WRITE CKSUM
>>         rpool       ONLINE       0     0     0
>>           raidz1-0  ONLINE       0     0     0
>>             c3t0d0  ONLINE       0     0     0
>>             c3t1d0  ONLINE       0     0     0
>>             c3t3d0  ONLINE       0     0     0
>>             c3t4d0  ONLINE       0     0     0
>> 
>> errors: No known data errors
>> tsoome at beastie:/code$ 
>> 
>> 
>> Please note: in theory, if you have 5 visible disks, you could create a boot pool using all 5 of them for data + parity, but such a configuration would not be advisable, because if one data disk develops an issue, you cannot boot (unless you swap the physical disks around).
>> 
>> Therefore, the suggestion is to verify how many disks your system BIOS or UEFI can see, and plan the boot pool accordingly, along the lines of the sketch below.
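>> 
>> For example (a sketch only; the device names below are illustrative, not taken from your system): with 5 BIOS-visible disks, keep the boot pool small and mirrored, and put the remaining disks into a separate data pool that the loader never needs to read:
>> 
>> # boot pool: BIOS-visible disks only, mirrored for redundancy
>> zpool create rpool mirror c0t0d0 c0t1d0
>> # data pool: the remaining disks need not be BIOS-visible
>> zpool create datapool raidz1 c0t2d0 c0t3d0 c0t4d0
>> 
>> With such a split, an issue on a data disk cannot prevent the system from booting.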
>> 
>> rgds,
>> toomas
>> 
>> > On 17. Feb 2021, at 11:26, Thebest videos <sri.chityala504 at gmail.com> wrote:
>> > 
>> > Hi there,
>> > 
>> > I've been facing an issue for a long time: I'm trying to create a RAIDZ
>> > configuration with multiple disks as below, i.e. RAIDZ1/RAIDZ2 with 4
>> > disks per vdev, 2 vdevs in total.
>> > 
>> > I'm running the commands below for each hard disk:
>> > 
>> > # create a GPT label on the disk
>> > gpart create -s gpt ada0
>> > # boot partition, then install the bootcode (pmbr + gptzfsboot)
>> > gpart add -a 4k -s 512K -t freebsd-boot ada0
>> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
>> > # 2G swap partition
>> > gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
>> > # rest of the disk for ZFS
>> > gpart add -a 1m -t freebsd-zfs -l disk0 ada0
>> > 
>> > 
>> > # create the RAIDZ pool
>> > 
>> > zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3 raidz2 ada4p3 ada5p3 ada6p3 ada7p3
>> > 
>> > # boot dataset, mounted manually
>> > zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
>> > mount -t zfs datapool/boot /mnt
>> > 
>> > # copy the install media contents onto it
>> > mount_cd9660 /dev/cd0 /media
>> > cp -r /media/* /mnt/.
>> > 
>> > # point the pool's bootfs at the boot dataset
>> > zpool set bootfs=datapool/boot datapool
>> > 
>> > 
>> > I've tried both RAIDZ1 and RAIDZ2 and get the same issue; screenshots are attached.
>> > 
>> > 
>> > Two screenshots are attached. The first (17th Feb) shows the issue when
>> > I create 2 vdevs with 4 disks each; the second (16th Feb) shows the
>> > issue when creating RAIDZ2 with 5 disks per vdev, 2 vdevs in total.
>> > 
>> > 
>> > Kindly respond, since I need to fix this issue.
>> > <Screenshot 2021-02-17 at 11.20.11 AM.png><Screenshot 2021-02-16 at 5.58.31 PM.png>
>> 