[OpenIndiana-discuss] Partitions for co-exist NAS drive
Toomas Soome
tsoome at me.com
Sat May 1 12:42:53 UTC 2021
> On 1. May 2021, at 15:30, Michelle <michelle at msknight.com> wrote:
>
> That's where I'm becoming unstuck.
>
> A Solaris 2 partition will only see the first 2 TB.
>
> There hangs my first problem.
>
> If I try to create any other partition, it gives me the warning about
> the 2 TB limit, and if I then try to create the EFI partition, it won't
> co-exist with anything and wants to wipe the whole disk again.
>
> Michelle.
MBR and GPT cannot co-exist. On this disk you need GPT, which means the MBR partitions will be removed.
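
A minimal sketch of the relabel with format(1m) (the disk name c2t0d0 is only an example, and this destroys the existing MBR layout):

format -e c2t0d0
format> label
# choose "1. EFI" when prompted for the label type, then confirm
format> quit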
Rgds,
Toomas
>
>
>> On Sat, 2021-05-01 at 12:21 +0000, Reginald Beardsley via openindiana-discuss wrote:
>>
>> I just went through several iterations of this, and, like you, the
>> last time I had done it was long ago. The following is based on
>> 2021.04.05.
>>
>> Large disks require an EFI (GPT) label. The gparted program creates a
>> table with 128 partition entries, which is a bit much. format(1m) will
>> also write an EFI label, which is usable with large disks.
>>
>> format -e
>> # select the disk from the menu
>> format> fdisk
>> # create a Solaris partition covering the entire drive and commit it to disk
>> format> partition
>> # create the desired slices, then write an EFI label
>> partition> quit
>> format> verify
>> format> quit
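>>
>> As an illustration only (the tags, sector numbers and sizes here are
>> made up), the partition step for a 100 GB slice 0 with the rest of the
>> disk in slice 1 might look roughly like this:
>>
>> partition> 0
>> # id tag: root, flags: wm, starting sector: 34, size: 100gb
>> partition> 1
>> # id tag: usr, flags: wm, size: all remaining sectors
>> partition> print
>> # review the slice table
>> partition> label
>> # choose "1. EFI" for the label type and confirm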
>>
>> You should now have a Sun EFI label with nine slices. Slice 8 is
>> reserved by format(1m) and can't be changed. The other two in use
>> should be the ones you created.
>>
>> In the first text-installer screen choose F5; in the second screen
>> select the slice you want to use. Continue with the install. When it
>> completes, reboot to single-user mode.
>>
>> zfs create -V <size> rpool/dump
>> zfs create -V <size> rpool/swap
>> dumpadm -d /dev/zvol/dsk/rpool/dump
>> swap -a /dev/zvol/dsk/rpool/swap
>> touch /reconfigure
>> init 6
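>>
>> After the reboot, a quick check that the new devices took effect
>> (these are standard commands, nothing install-specific):
>>
>> dumpadm        # with no arguments, prints the current dump configuration
>> swap -l        # lists the active swap devices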
>>
>> You should now come up with rpool in the 100 GB slice.
>>
>> That said, we can boot from RAIDZ now. The text-install on the
>> Desktop live image will let you create a mirror, RAIDZ1 or RAIDZ2 and
>> will take care of all the label stuff. Despite the statement that it
>> will only use 2 TB, it in fact uses the entire disk.
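>>
>> One quick way to confirm that (rpool is the default pool name the
>> installer uses):
>>
>> zpool list rpool    # SIZE should reflect the whole disk, not 2 TB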
>>
>> It creates a 250 MB s0 slice and the rest of the disk in s1. The 250
>> MB slice is labeled "System", but I've not seen any explanation of
>> it. I've also created RAIDZ2 pools by hand and used F5 to install
>> into them. F5 appears to be intended to install into a new BE in an
>> existing pool, hence the need to set up dump and swap by hand.
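>>
>> To inspect what the installer produced (the disk name below is only an
>> example):
>>
>> prtvtoc /dev/rdsk/c0t0d0s0    # print the EFI label and slice sizes
>> beadm list                    # list the boot environments in the pool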
>>
>> Ultimately I decided I didn't care about 1 GB of unused space out of
>> 16 TB, so I just went with the RAIDZ2 pool created by text-install.
>> The reconfigure on the first boot after the install is critical to
>> getting 2021.04.05 up properly.
>>
>> Reg
>> On Saturday, May 1, 2021, 02:57:23 AM CDT, Michelle <michelle at msknight.com> wrote:
>>
>> OK - I appear to be well out of touch.
>>
>> I booted the installer and went into prompt.
>>
>> Used format (only one 6 TB drive in the machine at this point) to
>> create a new Solaris 2 partition table, and then fdisk'd an all-free
>> hog to partition 1, giving partition 0 100 GB.
>>
>> I noticed that it must have gone on for 40-odd partitions, and also
>> there were none of the usual backup and reserved partitions at 2, 8
>> and 9 that I saw before.
>>
>> On installation of OI, I selected the drive and got the warning...
>> "you have chosen a gpt labelled disk. installing onto a gpt labelled
>> disk will cause the loss of all existing data"
>>
>> Out of interest I continued through and got the options for whole disk
>> or partition (MBR) ... the second of which gave me a 2 TB Solaris 2
>> partition in the list.
>>
>> I did try F5 to change partition, but it just took me straight back to
>> the installation menu at the start again.
>>
>> Things have obviously moved on and I haven't kept pace.
>>
>> I now have to work out how to do this on a gpt drive.
>>
>> If anyone has any notes, I'd be grateful.
>>
>> Michelle.
>>
>>
>>> On Sat, 2021-05-01 at 08:31 +0100, Michelle wrote:
>>> Well, I looked over my notes and the last time I did this was in
>>> 2014.
>>>
>>> My preference has always been to run OI on its own drive and have the
>>> main ZFS tank on a "whole drive" basis. However, thanks to the QNAP,
>>> that's changed.
>>>
>>> In 2014 I did a test. I took two 40 GB drives and did the partitions
>>> as an all-free hog on partition 0 ... I was simply testing the
>>> ability to configure rpool on two drives and have both active, so if
>>> one failed the other would keep running the OS.
>>>
>>> My immediate thought is to have 100 GB for the OS on partition 0 and
>>> the rest on partition 1. Also, turn on autoexpand for the tank pool
>>> and off for the rpool.
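>>>
>>> In command form (pool names as above), that would be:
>>>
>>> zpool set autoexpand=on tank
>>> zpool set autoexpand=off rpool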
>>>
>>> That's my gut feel.
>>>
>>> Anyone got any advice to offer please, before I commit finger to
>>> keyboard?
>>>
>>> Michelle.
>>>
>>>
>
>
> _______________________________________________
> openindiana-discuss mailing list
> openindiana-discuss at openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss