[OpenIndiana-discuss] RAIDZ issue

Toomas Soome tsoome at me.com
Thu Feb 18 11:18:36 UTC 2021



> On 18. Feb 2021, at 12:52, Thebest videos <sri.chityala504 at gmail.com> wrote:
> 
> OK. We also generated a .img file of our custom OS built from the FreeBSD source, uploaded it to DigitalOcean as a custom image, and then created a droplet from it. Everything works fine for the basic operating system, but we hit the same issue on the droplet. On VirtualBox we can at least create a single vdev with up to 5 disks, and 2 vdevs with 3 disks each (i.e. up to 6 disks), but on DigitalOcean we cannot even create a single vdev with 3 disks; it works fine with 2 disks as a mirror pool. We raised the issue with DigitalOcean, asking whether there is any restriction on the number of disks in a RAIDZ, but they say there are no constraints and a RAIDZ can be created with any number of disks. We still don't understand where the mistake is. We also raised the same question on the FreeBSD forum but got no response. I already shared the manual steps we follow to create the partitions and the RAIDZ configuration. Are we making a mistake in those commands, or is it, as you said, some kind of restriction on the number of disks in VirtualBox, and perhaps on the DigitalOcean side as well, i.e. a vendor-side restriction? Any guess whether it would work (assuming no mistakes in the commands we are using) if we attach the CD/image to a bare-metal server? Or any suggestions?


I have no personal experience with DigitalOcean, but the basic test is the same: if you get the loader OK prompt, use the lsdev -v command to check how many disks you can actually see. There is another option as well: with BIOS boot, when you see the very first spinner, press the space key and you will get the boot: prompt. This is a very limited but still useful prompt from the gptzfsboot program (the one which will try to find and start /boot/loader). At the boot: prompt you can enter: status. This will produce the same report as you get from lsdev.

So, if you know your VM should have, say, 10 disks, but boot: status or ok lsdev shows fewer, then you know there must be a BIOS limit (we use BIOS INT13h to access the disks).
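
For example, the check at either prompt looks roughly like this (illustrative only; the actual device list depends on your setup):

    boot: status        (at the gptzfsboot prompt, reached by pressing space at the first spinner)
    ok lsdev -v         (at the loader OK prompt)

If either command reports fewer disks than the VM actually has, the firmware is the limiting factor.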

Please note that if the provider offers the option to use UEFI, it *may* support a greater number of boot disks; the same check (lsdev -v) applies with UEFI as well.

rgds,
toomas


> These are the commands we are using to create the partitions and the RAIDZ configuration.
> NOTE: we create the gpart partitions below (boot, swap, root) on every hard disk and then add those disks to the zpool command.
> Question: should we create the partitions (boot, swap, root) on all hard disks that are part of the RAIDZ configuration, or is it enough to add them to the zpool as raw disks, or should we make only 2-3 disks bootable and leave the rest as raw disks? In any case, please check the commands below that we use to create the partitions and the zpool configuration:
>     gpart create -s gpt /dev/da0
>     gpart add -a 4k -s 512K -t freebsd-boot da0
>     gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
>     gpart add -a 1m -s 2G -t freebsd-swap -l swap1 da0
>     gpart add -a 1m -t freebsd-zfs -l disk1 da0
>     zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3 ada3p3 raidz2 ada4p3 ada5p3 ada6p3 ada7p3
>     zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
>     mount -t zfs datapool/boot /mnt
>     cp -r /temp/* /mnt/.
>     zpool set bootfs=datapool/boot datapool
>     zfs create -o mountpoint=/storage -o canmount=noauto datapool/storage
>     zfs create -o mountpoint=/conf -o canmount=noauto datapool/conf
>     shutdown and remove iso/img and start it again
>     zpool import datapool
>     mkdir /conf /storage
>     mount -t zfs datapool/conf /conf
>     mount -t zfs datapool/storage /storage
>     
> 
> 
> On Thu, Feb 18, 2021 at 3:33 PM Toomas Soome <tsoome at me.com> wrote:
> 
> 
>> On 18. Feb 2021, at 11:52, Thebest videos <sri.chityala504 at gmail.com> wrote:
>> 
>> As per your reply, I'm still not clear.
>> I have tried to create 2 pools (for testing purposes) with 4 disks each, each pool being a single vdev, and as expected it works. But that is not our requirement: we intend to have a single pool spanning as many disks as are available, split into multiple vdevs (max 5 disks per vdev), with any disks left over after filling the vdevs acting as spares (see the sketch below).
>> In the end, at most 5 disks come ONLINE in the vdev; the remaining disks go OFFLINE and their disk state shows as UNKNOWN. Is there any way to fix this issue?
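>> 
>> (For illustration: with 12 disks, the layout described above would look roughly like this; the partition names are placeholders:)
>> 
>>     zpool create datapool \
>>         raidz2 da0p3 da1p3 da2p3 da3p3 da4p3 \
>>         raidz2 da5p3 da6p3 da7p3 da8p3 da9p3 \
>>         spare da10p3 da11p3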
>> 
> 
> 
> If you want to use VirtualBox, then there is a limit: VirtualBox only sees the first 5 disk devices. This is a vbox limit and there are only two options: either accept it or file a feature request with the VirtualBox developers.
> 
> Different systems set different limits there; for example, VMware Fusion supports booting from the first 12 disks. It can also have more than 12 disks, but only the first 12 are visible to the boot loader.
> 
> Real hardware is vendor specific.
> 
> rgds,
> toomas
> 
> 
>> On Thu, Feb 18, 2021 at 1:24 AM Toomas Soome via openindiana-discuss <openindiana-discuss at openindiana.org> wrote:
>> 
>> 
>> > On 17. Feb 2021, at 20:49, Thebest videos <sri.chityala504 at gmail.com> wrote:
>> > 
>> > NOTE: we get the issue after we shut down, remove the ISO file from
>> > VirtualBox, and then power the server back on. If we keep the ISO file
>> > attached, our zpool stuff is fine. And we are creating boot, swap and root
>> > partitions on each disk.
>> 
>> vbox seems to have a limit on boot disks: it appears to “see” only 5. My vbox has an IDE boot disk, and when I added 6 SAS disks I could still only see 5: the IDE disk plus 4 SAS disks.
>> 
>> So all you need to do is add a disk for the boot pool and make sure it is the first one; once the kernel is up, it can see all the disks (see the sketch below).
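>> 
>> (A minimal sketch of that layout, assuming ada0 is the first, BIOS-visible disk and the data pool lives on the rest; the names are placeholders:)
>> 
>>     zpool create -o altroot=/mnt bootpool ada0p3      # small pool on the first disk; gptzfsboot/loader only needs this one
>>     zpool create datapool raidz2 ada1p3 ada2p3 ada3p3 ada4p3 ada5p3   # data pool; only the kernel needs to see these disks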
>> 
>> rgds,
>> toomas
>> 
>> 
>> > I'm not able to understand why the first 5 disks are ONLINE and the remaining
>> > disks are in an UNKNOWN state after powering off and then powering on.
>> > Our actual requirement is to create RAIDZ1/RAIDZ2 with a single vdev (up to 5
>> > disks per vdev); if there are more than 5 but fewer than 10 disks, the disks
>> > beyond the first 5 are spares and shouldn't be included in any vdev. If we
>> > have a multiple of 5 disks then we need to create multiple vdevs in the pool.
>> > Example, RAIDZ2: with 7 disks in total, 5 disks form a single vdev and the
>> > remaining 2 disks are spares with nothing else to do; with 12 disks in total
>> > there are 2 vdevs (5 disks per vdev), so 10 disks in 2 vdevs and the remaining
>> > 2 disks as spares.
>> > RAIDZ1: if we have only 3 disks then we should create RAIDZ1.
>> > RAIDZ1: if we have only 3 disks then we should create RAIDZ1
>> > 
>> > We wrote a zfs script for these requirements (but are currently testing with
>> > manual commands). We are able to create RAIDZ2 with a single vdev in a pool
>> > for 5 disks. It works with up to 9 disks, but with 10 disks 2 vdevs are
>> > created and, after power-on, the same error appears: zfs: i/o error, all
>> > copies blocked.
>> > I was testing RAIDZ by creating 2 vdevs with 3 disks per vdev; that works
>> > fine even after shutdown and power-on (as mentioned, we remove the ISO file
>> > after shutdown).
>> > But the issue appears when we create 2 vdevs with 4 disks per vdev. This time
>> > we do not get the error; instead we get the options that appear when you
>> > press the Esc key. If I type lsdev -v (as you said before), the first 5 disks
>> > are online and the remaining 3 disks are UNKNOWN.
>> > 
>> > Finally, I need to set up a RAIDZ configuration with multiples of 5 disks per
>> > vdev. Please look once again at the commands below that I am using to create
>> > the partitions and the RAIDZ configuration.
>> > 
>> > NOTE: the gpart commands below are run for each disk
>> > 
>> > gpart create -s gpt ada0
>> > 
>> > 
>> > gpart add -a 4k -s 512K -t freebsd-boot ada0
>> > 
>> > 
>> > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
>> > 
>> > 
>> > gpart add -a 1m -s 2G -t freebsd-swap -l swap0 ada0
>> > 
>> > 
>> > gpart add -a 1m -t freebsd-zfs -l disk0 ada0
>> > 
>> > 
>> > zpool create -o altroot=/mnt datapool raidz2 ada0p3 ada1p3 ada2p3
>> > ada3p3  raidz2 ada4p3 ada5p3 ada6p3 ada7p3
>> > 
>> > 
>> > zfs create -o mountpoint=/ -o canmount=noauto datapool/boot
>> > 
>> > 
>> > mount -t zfs datapool/boot /mnt
>> > 
>> > 
>> > mount_cd9660 /dev/cd0 /media
>> > 
>> > 
>> > cp -r /media/* /mnt/.
>> > 
>> > 
>> > zpool set bootfs=datapool/boot datapool
>> > 
>> > 
>> > Shut down, remove the ISO, and power on the server
>> > 
>> > 
>> > Kindly suggest the correct steps if I'm doing something wrong.
>> > 
>> > On Wed, Feb 17, 2021 at 11:51 PM Thebest videos <sri.chityala504 at gmail.com>
>> > wrote:
>> > 
>> >> prtconf -v | grep biosdev does not work on FreeBSD.
>> >> I think it is a legacy (BIOS) boot system (I'm not sure, actually; I didn't
>> >> find anything EFI-related). Is there any way to check for EFI?
>> >> 
>> >> Create the pool with EFI boot:
>> >> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
>> >> 
>> >> How can I create the pool with EFI boot,
>> >> and what does -B refer to?
>> >> 
>> >> On Wed, Feb 17, 2021 at 11:00 PM John D Groenveld <groenveld at acm.org>
>> >> wrote:
>> >> 
>> >>> In message <272389262.2537371.1613575739056 at mail.yahoo.com>, Reginald
>> >>> Beardsley
>> >>> via openindiana-discuss writes:
>> >>>> I was not aware that it was possible to boot from RAIDZ. It wasn't
>> >>> possible wh
>> >>> 
>> >>> With the current text installer, escape to a shell.
>> >>> Confirm the disks are all BIOS accessible:
>> >>> # prtconf -v | grep biosdev
>> >>> Create the pool with EFI boot:
>> >>> # zpool create -B rpool raidz c0t0d0 c0t1d0 c0t3d0
>> >>> Exit and return to the installer and then F5 Install to an Existing Pool
>> >>> 
>> >>> John
>> >>> groenveld at acm.org
>> >>> 
>> >> 
>> 
>> 
>> <Screenshot 2021-02-18 at 12.38.35 PM.png>
> 


