[OpenIndiana-discuss] OI 2020.10 install disk FAIL #2 !!!
Toomas Soome
tsoome at me.com
Mon Mar 1 06:06:00 UTC 2021
> On 1. Mar 2021, at 05:55, Reginald Beardsley via openindiana-discuss <openindiana-discuss at openindiana.org> wrote:
>
> The Debian-derived gparted disk did not offer any ZFS FS types.
>
> Out of curiosity I just booted FreeBSD 12.2 and messed with gpart. It does not offer "apple-zfs" as an option. Aside from ZFS not being an Apple creation, it's rather perverse that in 2021 one would need to use a beta of another OS to partition a >2 TB disk for OI.
>
> From my admin log book:
> ---------------------------------------------------------------------------------------------------------------------------
> 1-26-13
>
> oi_151a7
>
> text installer limited system to 2 TB of 3 TB disk
>
> backed up to shell and ran format which correctly detected disk
>
> successfully labeled disk with 2 slices of 128 GB and 2.6 TB
>
> created pools w/ zpool on both slices
>
> relabeling 3 TB disk using OI format(1M) runs into logic errors in the partition.
>
> solution is to do a "free hog" modify & take defaults for all slices, then rename s0
> -----------------------------------------------------------------------------------------------------------------------
>
> From there I moved the disk to my Solaris 10 u8 system, which happily created the pools. I was also building an NL40-based system at the time, and the log book gets a bit unclear. The s0 & s1 slices are the hallmark of my ZFS boot setup: mirror on s0 and RAIDZ on s1. I don't think the Sol 10 instance has the 128 GB s0 slice now. As I'm pretty sure it will kernel panic if I run "format -e" and select the disk, I'd rather not look.
>
> The message here is OI was capable of doing the geometry for a >2 TB disk at oi_151a7. So for Hipster 2020.10 to not be able to do that is a considerable regression.
root at beastie:/var/log# zpool status
pool: rpool
state: ONLINE
scan: resilvered 1,68T in 0 days 10:10:07 with 0 errors on Fri Oct 25 05:05:34 2019
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
c3t0d0 ONLINE 0 0 0
c3t1d0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t4d0 ONLINE 0 0 0
errors: No known data errors
root at beastie:/var/log# prtvtoc /dev/rdsk/c3t0d0
* /dev/rdsk/c3t0d0 partition map
*
* Dimensions:
* 512 bytes/sector
* 7814037168 sectors
* 7814037101 accessible sectors
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 34 222 255
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 12 00 256 524288 524543
1 4 00 524544 7813496207 7814020750
8 11 00 7814020751 16384 7814037134
root at beastie:/var/log# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c3t0d0 <WDC-WD4004FZWX-00GBGB0-81.H0A81-3.64TB>
/pci at 0,0/pci15d9,805 at 1f,2/disk at 0,0
1. c3t1d0 <WDC-WD4005FZBX-00K5WB0-01.01A01-3.64TB>
/pci at 0,0/pci15d9,805 at 1f,2/disk at 1,0
2. c3t3d0 <WDC-WD4003FZEX-00Z4SA0-01.01A01-3.64TB>
/pci at 0,0/pci15d9,805 at 1f,2/disk at 3,0
3. c3t4d0 <WDC-WD4005FZBX-00K5WB0-01.01A01-3.64TB>
/pci at 0,0/pci15d9,805 at 1f,2/disk at 4,0
Specify disk (enter its number): 0
selecting c3t0d0
[disk formatted]
/dev/dsk/c3t0d0s1 is part of active ZFS pool rpool. Please see zpool(1M).
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> ver
Volume name = < >
ascii name = <WDC-WD4004FZWX-00GBGB0-81.H0A81-3.64TB>
bytes/sector = 512
sectors = 7814037168
accessible sectors = 7814020717
first usable sector = 34
last usable sector = 7814037134
Part Tag Flag First Sector Size Last Sector
0 system wm 256 256.00MB 524543
1 usr wm 524544 3.64TB 7814020750
2 unassigned wm 0 0 0
3 unassigned wm 0 0 0
4 unassigned wm 0 0 0
5 unassigned wm 0 0 0
6 unassigned wm 0 0 0
8 reserved wm 7814020751 8.00MB 7814037134
format> ^D
root at beastie:/var/log#
Please note: a VTOC label cannot address large disks; you must use GPT with those. The format command offers VTOC by default, but you can switch to GPT by running format -e. (IMO this is a bug: GPT should be the default and VTOC should be available via expert mode, given how many people fail with partitioning.)
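The 2 TB ceiling comes from the VTOC/SMI label storing sector counts in 32-bit fields. A quick sanity check (sector count taken from the prtvtoc output above; 512-byte sectors assumed):

```python
# Back-of-the-envelope check of the VTOC addressing limit.
SECTOR = 512
vtoc_limit = (2 ** 32) * SECTOR   # 32-bit sector counts -> 2 TiB max
disk_bytes = 7814037168 * SECTOR  # total sectors from the prtvtoc output above

print(vtoc_limit)               # 2199023255552 bytes = 2 TiB
print(disk_bytes)               # 4000787030016 bytes, a "4 TB" (3.64 TiB) disk
print(disk_bytes > vtoc_limit)  # True -- hence a GPT label is required
```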
Sharing a disk between multiple pools is bad practice. It can be done, but you have to understand the consequences. The safest and easiest option is to select the whole-disk setup in the installer; whole-disk setup creates GPT partitioning and is large-disk safe.
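To illustrate the whole-disk point, a sketch (tank and c3t0d0 are placeholder names; substitute your own): when zpool is handed a bare disk rather than a slice, it writes an EFI (GPT) label itself, so the full capacity of a >2 TB disk is usable.

```shell
# Placeholder pool/device names; run as root.
# Passing the bare disk (no s0/p0 suffix) lets zpool write an EFI/GPT label:
zpool create tank c3t0d0

# Confirm the label covers the whole disk:
prtvtoc /dev/rdsk/c3t0d0    # "accessible sectors" should be near the full count
```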
If you have any questions or additional information, please feel free to contact me.
rgds,
toomas
>
> Reg
> On Sunday, February 28, 2021, 09:00:17 PM CST, John D Groenveld <groenveld at acm.org> wrote:
>
> In message <845919546.1414404.1614561339642 at mail.yahoo.com>, Reginald Beardsley
> via openindiana-discuss writes:
>> Following hints from others, I used a *working* copy of gparted to put a GPT l
>> abel on a 5 TB disk in advance of attempting to install OI.
>
> I booted the FreeBSD 13 Beta installer:
> <URL:https://download.freebsd.org/ftp/releases/ISO-IMAGES/13.0/>
> With gpart(8), I created a GPT scheme and then added an apple-zfs
> partition.
> Then I created a zpool named rpool with features disabled and
> exported it.
> # zpool create -d rpool ada0p1
> # zpool export rpool
>
> Using the OI text installer I was able to F5 to install to an existing
> pool.
> <URL:http://dlc.openindiana.org/>
>
>> I took photos of the screen should anyone question this
>
> As Josh Clulow noted, the OI text installer keeps a logfile in /tmp
> which would be helpful if you're interested in providing a bug report.
>
> John
> groenveld at acm.org
>
> _______________________________________________
> openindiana-discuss mailing list
> openindiana-discuss at openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss