[OpenIndiana-discuss] Full disk install of 2021.04_rc1 on 5 TB disk

Reginald Beardsley pulaskite at yahoo.com
Sat Apr 10 01:21:36 UTC 2021


 
Suppose you have a failing disk controller. In a RAIDZ1 context it could corrupt 2-3 drives in a 3-drive pool, and you would lose the pool. By keeping the data pool in a slice separate from the root pool, you can take it offline and reduce the risk to its contents.
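A minimal sketch of what I mean (the pool name "tank" is illustrative):

    zpool export tank    # take the data pool offline while the controller is suspect
    # ... replace or repair the controller ...
    zpool import tank    # bring the pool back once the hardware is trusted again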

That's something I just went through. 

The s0 & s1 split is really only for small system setups; it was a way of running RAIDZ1/RAIDZ2 on systems with room for only 3-4 drives back when you could not boot from RAIDZ.

With the ability to boot from RAIDZ it doesn't have as much value, but I think it still has some.
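A minimal sketch of that layout, assuming three disks c0t0d0 through c0t2d0 with SMI labels, each carved into a small s0 and a large s1 (device names are illustrative):

    # mirrored, bootable root pool on the small s0 slices
    zpool create rpool mirror c0t0d0s0 c0t1d0s0
    # RAIDZ1 data pool on the large s1 slices
    zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1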

Have Fun!
Reg

     On Friday, April 9, 2021, 06:33:42 PM CDT, Joshua M. Clulow <josh at sysmgr.org> wrote:  
 
 On Fri, 9 Apr 2021 at 16:15, Reginald Beardsley via
openindiana-discuss <openindiana-discuss at openindiana.org> wrote:
> Here's what the system reports after the initial reboot.  The console log from my fumbling around with "format -e" and its "I'm sorry, I can't do that" responses was too messy to be worth cleaning up; I'd have to do it all over.  At this point I know how to get what I want, so I plan to order large disks and do my s0 & s1 layout.
>
> I can see no reason the GUI installer can't handle large disks and present the user with a 9-slice table as per traditional Sun SMI labels.  I had that until I decided to demonstrate a full-disk install; I should have quit while I was ahead.  I plan to go back to the 100 GB root pool in s0 with the rest in s1, so I'll try to capture that process more accurately when I build out the new disk array.

Out of interest, why not just "zfs create -o mountpoint=/export
rpool/export" to get a separate /export file system, instead of the
slices and separate pools?
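A minimal sketch of that approach (the child dataset name is illustrative; the commands assume the default rpool):

    zfs create -o mountpoint=/export rpool/export
    zfs create rpool/export/home    # children inherit /export as their mountpoint prefix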


Cheers.

-- 
Joshua M. Clulow
http://blog.sysmgr.org
  

