[OpenIndiana-discuss] Full disk install of 2021.04_rc1 on 5 TB disk

Reginald Beardsley pulaskite at yahoo.com
Sat Apr 10 00:44:33 UTC 2021


At the time I created the 2-slice arrangement, it was not possible to boot from a RAIDZ array. Over the course of 10 years I have found the arrangement to be very robust. It has survived several disk failures and, more recently, a motherboard failure without loss of data on my S10_u8 system. That is not to be sneezed at.

Is it still justified? I don't know, and testing it would be rather more work than I want to commit for a few cents of disk space.

I am not aware of any disadvantage to it, other than that a 3-5 way mirror on the root pool alongside a RAIDZ[123] pool on the same drives wastes a few pennies of space per disk. A 100 GB slice should be good for around 10 or more BEs.
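
For anyone unfamiliar with the layout, here is a minimal sketch of the two-slice scheme on three disks, assuming each disk carries an SMI label with a ~100 GB s0 slice and the remainder in s1 (device and pool names are hypothetical, and the installer normally creates the root pool itself):

    # root pool mirrored across every disk's s0 slice
    zpool create rpool mirror c0t0d0s0 c0t1d0s0 c0t2d0s0
    # data pool with parity across the s1 slices
    zpool create tank raidz c0t0d0s1 c0t1d0s1 c0t2d0s1

Because the root pool is a plain mirror on every disk, the box can still boot after a disk failure (boot blocks still have to be installed on each s0 slice).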

In the case of a 5-disk array, I might well put a full OS image in a recovery BE on a 2-way mirrored pool, with a 3-way mirror for the regular BEs. I configured my Sun 3/60 to boot a miniroot from a sliver of disk space by changing the DIAG switch setting, to avoid having to wait on reading the 1/4" tape miniroot into swap. Not appropriate in a work setting, but just fine for a home system. I don't recall that I ever needed it, but it was nice to have.
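
As a hedged sketch of that 5-disk idea (again with hypothetical device and pool names), the recovery BE would live in its own small root pool on two of the s0 slices, alongside the regular root pool on the other three, while the five s1 slices form the RAIDZ data pool as before:

    # regular BEs on a 3-way mirror
    zpool create rpool mirror c0t0d0s0 c0t1d0s0 c0t2d0s0
    # recovery BE holding a full OS image on a 2-way mirror
    zpool create rescue mirror c0t3d0s0 c0t4d0s0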

There is no "one size fits all" in computing. The s0 & s1 configuration fits my use case well.

Have Fun!
Reg

     On Friday, April 9, 2021, 06:33:42 PM CDT, Joshua M. Clulow <josh at sysmgr.org> wrote:  
 
 On Fri, 9 Apr 2021 at 16:15, Reginald Beardsley via
openindiana-discuss <openindiana-discuss at openindiana.org> wrote:
> Here's what the system reports after the initial reboot.  The console log while I fumbled around with "format -e" and "I'm sorry. I can't do that."  was too crazy to bother cleaning up.  I'd have to do it all over.  At this point I know how to get what I want.  So I plan to order large disks and do my s0 & s1 layout.
>
> I can see no reason that large disks can't be handled in the GUI installer, presenting the user with a 9-slice table as per traditional Sun SMI labels.  I had that until I decided I'd demonstrate a full disk install.  Should have quit while I was ahead.  I plan to go back to the 100 GB root pool in s0 and the rest in s1, so I'll try to capture that process more accurately when I build out the new disk array.

Out of interest, why not just "zfs create -o mountpoint=/export
rpool/export" to get a separate /export file system, instead of the
slices and separate pools?


Cheers.

-- 
Joshua M. Clulow
http://blog.sysmgr.org
  

