[OpenIndiana-discuss] RAIDZ issue

Toomas Soome tsoome at me.com
Thu Feb 18 10:03:15 UTC 2021



> On 18. Feb 2021, at 11:52, Thebest videos <sri.chityala504 at gmail.com> wrote:
> 
> As per your reply, I'm not clear.
> I have tried creating 2 pools with 4 disks each (for testing purposes), each pool with a single vdev, and as expected that works. But that is not our requirement, since we intend to have a single pool with many disks spread across multiple vdevs (max 5 disks per vdev), and any disks left over after filling the vdevs should act as spare disks.
> In the end, at most 5 disks come up ONLINE in a vdev; the remaining disks go OFFLINE and their state is UNKNOWN. Is there any way to fix this issue?
> 


If you want to use VirtualBox, then there is a limit: VirtualBox only sees the first 5 disk devices at boot. This is a vbox limit, and there are only two options: either accept it or file a feature request with the VirtualBox developers.
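One possible workaround is to give the VM a small dedicated boot disk and attach it at the first port of the storage controller, so it is among the devices the firmware enumerates. A rough VBoxManage sketch (the VM name "oi-test", the controller name "SATA", and the disk image names are illustrative, not from this thread):

  # attach the boot disk at port 0 so the firmware sees it first
  VBoxManage storageattach oi-test --storagectl "SATA" --port 0 --device 0 --type hdd --medium boot.vdi
  # data disks go on later ports; the loader may not see them, but the kernel will
  VBoxManage storageattach oi-test --storagectl "SATA" --port 1 --device 0 --type hdd --medium data1.vdi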

Different systems set different limits there; for example, VMware Fusion supports booting from the first 12 disks. A VM can have more than 12 disks, but only the first 12 are visible to the boot loader.

Real hardware is vendor specific.

rgds,
toomas


> On Thu, Feb 18, 2021 at 1:24 AM Toomas Soome via openindiana-discuss <openindiana-discuss at openindiana.org> wrote:
> 
> 
> > On 17. Feb 2021, at 20:49, Thebest videos <sri.chityala504 at gmail.com> wrote:
> > 
> > NOTE: we get the issue after a shutdown, when we remove the ISO file from
> > VirtualBox and then power the server on. If we keep the ISO file attached, our
> > zpool setup is fine. We create boot, swap, and root partitions on each disk.
> 
> vbox seems to have a limit on boot disks - it appears to "see" only 5. My vbox has an IDE boot disk, and when I added 6 SAS disks, I could only see 5 - the IDE disk + 4 SAS.
> 
> So all you need to do is add a dedicated disk for the boot pool and make sure it is the first one - once the kernel is up, it can see all the disks.
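> Something along these lines, as a sketch only (device names are illustrative; on FreeBSD the dedicated boot disk would be the first one, e.g. ada0):
> 
>   # small pool the loader boots from, on the first disk only
>   zpool create bootpool ada0p2
>   zfs create -o mountpoint=/ -o canmount=noauto bootpool/boot
>   zpool set bootfs=bootpool/boot bootpool
>   # the raidz data pool then lives on the remaining disks;
>   # only the kernel needs to see those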
> 
> rgds,
> toomas
> 
> 
> > I'm not able to understand why the first 5 disks are ONLINE and the remaining
> > disks are in UNKNOWN state after a power off and power on.
> > Our requirement is to create RAIDZ1/RAIDZ2 with up to 5 disks per vdev. If
> > there are more than 5 but fewer than 10 disks, the disks beyond the first 5
> > are spares and shouldn't be included in any vdev; if we have a multiple of 5
> > disks, then we need to create multiple vdevs in the pool.
> > Example (RAIDZ2): if there are 7 disks in total, then 5 disks form a single
> > vdev and the remaining 2 disks are spares, with nothing else to do; if we have
> > 12 disks in total, then we get 2 vdevs (5 disks per vdev), so 10 disks in 2
> > vdevs and the remaining 2 disks as spares.
> > RAIDZ1: if we have only 3 disks, then we should create RAIDZ1. The grouping we
> > want looks like the sketch below.
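> > A minimal sh sketch of that grouping (illustrative only: the disk list and
> > pool name are placeholders, and only the raidz2 path is shown; the 3-disk
> > RAIDZ1 case is not covered):
> > 
> >   #!/bin/sh
> >   # group partitions into 5-wide raidz2 vdevs; leftovers become hot spares
> >   DISKS="ada0p3 ada1p3 ada2p3 ada3p3 ada4p3 ada5p3 ada6p3"  # placeholder
> >   set -- $DISKS
> >   VDEVS=""
> >   while [ $# -ge 5 ]; do
> >       VDEVS="$VDEVS raidz2 $1 $2 $3 $4 $5"
> >       shift 5
> >   done
> >   SPARES="$*"
> >   zpool create datapool $VDEVS
> >   [ -n "$SPARES" ] && zpool add datapool spare $SPARES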
> > 
> > Here, we wrote a ZFS script for these requirements (but currently I am
> > testing with manual commands). We are able to create RAIDZ2 with a single
> > 5-disk vdev in a pool, and it works with up to 9 disks. But if we have 10
> > disks, then 2 vdevs are created, and after power on the same error comes up:
> > zfs: i/o error all copies blocked.
> > I also tested RAIDZ by creating 2 vdevs with 3 disks each; that works fine
> > even after shutdown and power on (as mentioned, we remove the ISO file after
> > shutdown).
> > But the issue appears when we create 2 vdevs with 4 disks each. This time we
> > get no error message; instead, the loader shows the options we would see after
> > pressing the Esc key. If I type lsdev -v (as you said before), the first 5
> > disks are ONLINE and the remaining 3 disks are UNKNOWN.
> > 
> > Finally, I need to set up a RAIDZ configuration with multiples of 5 disks per
> > vdev. Please look once more at the commands below that I use to create the
> > partitions and the RAIDZ configuration.
> > 
> > NOTE: the gpart commands below are run for each disk
> > 
> > # create a GPT partition table
> > gpart create -s gpt ada0
> > 
> > # small partition for the boot code
> > gpart add -a 4k -s 512K -t freebsd-boot ada0
> > 
> > # install the protective MBR and the ZFS boot code
> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
> > 
> > # 2 GB swap partition
> > gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
> > 
> > # the rest of the disk for ZFS
> > gpart add -a 1m -t freebsd-zfs -l disk0 ada0
> > 
> > # pool with two 4-disk raidz2 vdevs
> > zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3 raidz2 ada4p3 ada5p3 ada6p3 ada7p3
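> > If the leftover disks should act as hot spares, I understand something like
> > this would follow (ada8/ada9 are illustrative):
> > 
> > # register the remaining disks as hot spares for the pool
> > zpool add datapool spare ada8p3 ada9p3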
> > 
> > 
> > # boot dataset; mounted manually below
> > zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
> > 
> > mount -t zfs datapool/boot /mnt
> > 
> > # copy the installation media onto the boot dataset
> > mount_cd9660 /dev/cd0 /media
> > cp -r /media/* /mnt/.
> > 
> > # tell the loader which dataset to boot
> > zpool set bootfs=datapool/boot datapool
> > 
> > 
> > Then shut down, remove the ISO, and power on the server.
> > 
> > 
> > Kindly suggest the correct steps if I am doing something wrong.
> > 
> > On Wed, Feb 17, 2021 at 11:51 PM Thebest videos <sri.chityala504 at gmail.com> wrote:
> > 
> >> prtconf -v | grep biosdev does not work on FreeBSD.
> >> I think it is a legacy (BIOS) boot system, but I'm not sure; I didn't find
> >> anything EFI-related. Is there any way to check for EFI?
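> >> Perhaps a sysctl like the following would tell; I'm not sure it is the right
> >> check, but I believe it reports BIOS or UEFI on recent FreeBSD releases:
> >> 
> >> sysctl machdep.bootmethod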
> >> 
> >> Create the pool with EFI boot:
> >> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
> >> 
> >> How can I create a pool with EFI boot, and what does -B refer to?
> >> 
> >> On Wed, Feb 17, 2021 at 11:00 PM John D Groenveld <groenveld at acm.org> wrote:
> >> 
> >>> In message <272389262.2537371.1613575739056 at mail.yahoo.com>, Reginald Beardsley
> >>> via openindiana-discuss writes:
> >>>> I was not aware that it was possible to boot from RAIDZ. It wasn't possible wh
> >>> 
> >>> With the current text installer, escape to a shell.
> >>> Confirm the disks are all BIOS accessible:
> >>> # prtconf -v | grep biosdev
> >>> Create the pool with EFI boot:
> >>> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
> >>> Exit and return to the installer and then F5 Install to an Existing Pool
> >>> 
> >>> John
> >>> groenveld at acm.org
> >>> 
> >> 
> 
> 
> _______________________________________________
> openindiana-discuss mailing list
> openindiana-discuss at openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
> <Screenshot 2021-02-18 at 12.38.35 PM.png>


