[OpenIndiana-discuss] [oi-dev] [discuss] Making OI friendly to new users

Toomas Soome tsoome at me.com
Mon Aug 18 06:32:49 UTC 2025



> On 18. Aug 2025, at 01:43, Rolf M. Dietze <rmd at orbit.in-berlin.de> wrote:
> 
> ok, double parity might be a reason, but on the rpool? I only have
> the OS installation on rpool; on systems with bigger rpools, there
> might be /opt in the rpool as well. Only a few things get written
> to rpool, e.g. logs etc. User data belongs in a different place, e.g.
> NFS or local storage at whatever redundancy level. But terabytes
> of rpool do sound a bit surprising to me. For instance, on file
> servers a 16GB compact flash or SD card is plenty of storage and
> fine, but that varies with what one wants to have in the rpool.
> 
> And yes, I do need a bit more than the proposed 5 minutes; my mostly
> USB2 USB sticks aren't that fast, and network speed is limited
> somewhere in between as well :)
> 
> regs, Rolf
> 

rpool is nothing special compared to any other data pool, *except* that the devices used to build rpool must be accessible and readable by the boot loader. Other than that, it really depends on what the hardware allows you to do and what kind of resilience and performance you need.

The system below was built like this:

History for 'rpool':
2016-12-13.02:55:52 zpool create -B rpool raidz1 c1t0d0 c1t1d0 c1t3d0 c1t4d0

Disks (as you can see, some have already been replaced):

       0. c3t0d0 <WDC-WD4004FZWX-00GBGB0-81.H0A81-3.64TB>
          /pci at 0,0/pci15d9,805 at 1f,2/disk at 0,0
       1. c3t1d0 <WDC-WD4005FZBX-00K5WB0-01.01A01-3.64TB>
          /pci at 0,0/pci15d9,805 at 1f,2/disk at 1,0
       2. c3t3d0 <HGST-HUS726T4TALA6L4-VLGNW984-3.64TB>
          /pci at 0,0/pci15d9,805 at 1f,2/disk at 3,0
       3. c3t4d0 <WDC-WD4005FZBX-00K5WB0-01.01A01-3.64TB>
          /pci at 0,0/pci15d9,805 at 1f,2/disk at 4,0

tsoome at beastie:~$ zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 08:46:27 with 0 errors on Sun Feb  9 22:21:49 2025
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0

errors: No known data errors
tsoome at beastie:~$ zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  14,5T  10,0T  4,48T        -         -    31%    69%  1.00x    ONLINE  -
tsoome at beastie:~$ 
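As a rough sanity check on the numbers above: in a raidz vdev of n equally sized disks with p parity disks' worth of redundancy, roughly (n-p)/n of the raw space holds data. A minimal sketch of that arithmetic, using the four 3.64 TiB disks listed (it ignores ZFS metadata, padding and allocation overhead, so real figures differ slightly):

```python
# Rough raidz usable-capacity estimate. This is a back-of-the-envelope
# model only; it ignores ZFS metadata, raidz padding and allocation
# overhead, so the real usable space is somewhat lower.

def raidz_usable(disk_size_tib: float, n_disks: int, parity: int) -> float:
    """Approximate data capacity of a single raidz vdev."""
    return disk_size_tib * (n_disks - parity)

disks = 4
size_tib = 3.64                      # per-disk size as reported by format above

raw = disks * size_tib               # zpool list reports raw size for raidz
usable = raidz_usable(size_tib, disks, parity=1)

print(f"raw:    {raw:.2f} TiB")      # ~14.56 TiB, matching the 14,5T SIZE above
print(f"usable: {usable:.2f} TiB")   # ~10.92 TiB of data capacity
```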

rgds,
toomas



> Quoting Reginald Beardsley via openindiana-discuss <openindiana-discuss at openindiana.org>:
> 
>> Simple. I wanted double parity for rpool but only single parity for spool, my scratch space. That allowed more space for spool. The benefit was 2.3 TB for rpool and 4.2 TB for spool. An extra 1.4 TB of scratch space.
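The parity trade-off described above can be sketched numerically: on the same four slices, raidz2 gives up two slices' worth of capacity where raidz1 gives up only one. The per-slice size below is an illustrative assumption chosen to reproduce the quoted 4.2 TB spool figure, not the actual slice layout (the rpool slices were evidently smaller):

```python
# Why single parity frees up scratch space: a raidz vdev of n slices
# loses `parity` slices' worth of capacity to redundancy.
# slice_tb is a hypothetical value, not taken from the actual layout.

def raidz_usable(slice_tb: float, n_slices: int, parity: int) -> float:
    return slice_tb * (n_slices - parity)

slice_tb = 1.4                                     # assumed per-disk slice size

as_raidz2 = raidz_usable(slice_tb, 4, parity=2)    # 2 data slices
as_raidz1 = raidz_usable(slice_tb, 4, parity=1)    # 3 data slices

print(f"raidz2: {as_raidz2:.1f} TB usable")
print(f"raidz1: {as_raidz1:.1f} TB usable")
print(f"extra scratch from single parity: {as_raidz1 - as_raidz2:.1f} TB")
```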
>> 
>> I'm a former reflection seismic research scientist/programmer. Many times I came to work only to find that the overnight job had failed because I had filled my dedicated 2 TB RAID filesystem, back in 2006-2007. Seismic eats space like people eat popcorn. I was creating models of the rock properties in the GoM: 600 miles EW, 300 miles NS, 6 miles deep, sampled at 25 m horizontally and 200 ft vertically. This was a 9-month project with a brick-wall deadline set by the Department of the Interior Minerals Management Service lease deadline, all cobbled together on the fly. There were about a dozen physical attributes for which I constructed those massive models.
>> 
>> 
>> My redline for a job is "runs overnight". Things get hairy when you are looking at 1-2 week runtimes, which are the norm for 3D seismic processing image-formation runs: 50,000 cores in the cluster for the job.
>> 
>>> zpool status
>> pool: rpool
>> state: ONLINE
>> scan: scrub repaired 0B in 0 days 00:32:35 with 0 errors on Fri Jun 20 16:35:03 2025
>> config:
>> 
>> NAME                         STATE     READ WRITE CKSUM
>> rpool                        ONLINE       0     0     0
>>   raidz2-0                   ONLINE       0     0     0
>>     c0t5000039A7B78030Fd0s1  ONLINE       0     0     0
>>     c0t5000039A7B7002F3d0s1  ONLINE       0     0     0
>>     c0t5000039A7B700306d0s1  ONLINE       0     0     0
>>     c0t5000039A7B700305d0s1  ONLINE       0     0     0
>> cache
>>   c5t001B448B48E32A6Dd0s0    ONLINE       0     0     0
>> 
>> errors: No known data errors
>> 
>> pool: spool
>> state: ONLINE
>> scan: scrub repaired 0B in 0 days 00:01:16 with 0 errors on Fri Jun 20 16:03:58 2025
>> config:
>> 
>> NAME                         STATE     READ WRITE CKSUM
>> spool                        ONLINE       0     0     0
>>   raidz1-0                   ONLINE       0     0     0
>>     c0t5000039A7B78030Fd0s3  ONLINE       0     0     0
>>     c0t5000039A7B7002F3d0s3  ONLINE       0     0     0
>>     c0t5000039A7B700306d0s3  ONLINE       0     0     0
>>     c0t5000039A7B700305d0s3  ONLINE       0     0     0
>> cache
>>   c5t001B448B48E32A6Dd0s1   ONLINE       0     0     0
>> 
>> errors: No known data errors
>>> 
>> 
>> Have fun!
>> Reg
>>     On Sunday, August 17, 2025 at 01:25:02 PM CDT, Rolf M. Dietze <rmd at orbit.in-berlin.de> wrote:
>> 
>> Hi there,
>> 
>> 
>> well, I must admit, I am a bit dazzled by your installation
>> problems, since I read this post of yours on a freshly
>> installed OpenIndiana box. What am I missing? Hardware is a
>> Dell OptiPlex 9020 that prtdiag lists as:
>> System Configuration: Dell Inc. OptiPlex 9020
>> BIOS Configuration: Dell Inc. A25 05/30/2019
>> 
>> ==== Processor Sockets ====================================
>> 
>> Version                          Location Tag
>> -------------------------------- --------------------------
>> Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz SOCKET 0
>> 
>> ==== Memory Device Sockets ================================
>> 
>> Type        Status Set Device Locator      Bank Locator
>> ----------- ------ --- ------------------- ----------------
>> DDR3        in use 0  DIMM3
>> DDR3        in use 0  DIMM1
>> DDR3        in use 0  DIMM4
>> DDR3        in use 0  DIMM2
>> 
>> ==== On-Board Devices =====================================
>> "Intel HD Graphics"
>> NETWORK_NAME_STRING
>> 
>> ==== Upgradeable Slots ====================================
>> 
>> ID  Status    Type            Description
>> --- --------- ---------------- ----------------------------
>> 1  available PCI Exp. Gen 3 x16 X16
>> 2  in use    PCI Exp. Gen 2 x1 X1
>> 3  available PCI              PCI
>> 4  available PCI Exp. Gen 2 x4 X4
>> 
>> 
>> well ok, it is a cheap all-purpose business desktop system
>> with internal Intel HD graphics, some 32G RAM and a quad-core
>> i5 at 3.3GHz. Not that sophisticated, but it runs well. It has
>> 2 internal SSDs used as a mirrored rpool; $HOMEs are supplied
>> via NFS from a Dell PowerEdge R720 (OpenIndiana as well) with
>> two SSDs for the rpool mirror and 16 disks set up as raidz3. I
>> had to flash the controller so that I can use zfs instead of
>> the Dell-internal whatever-RAID. If you count the number of
>> disks: well, I removed the DVD-ROM and tape drive and put the
>> SSDs there.
>> On the software side, I threw away the MATE stuff, because I
>> personally don't like it; I went with xdm as login manager
>> and whatever flavor of twm, fvwm etc. There is a decent Firefox
>> running thanks to Carsten. This was important because we run a
>> lot of web-based applications here. I added gcc4, gfortran
>> and gnu-ada as packages from opencsw, as well as texlive,
>> xfig, xpdf etc. from other sources. It took about 2 hours or
>> so to set the box up. Mostly I use the text installer; from
>> time to time I start up the GUI installer for installations,
>> which is slower.
>> 
>> I must admit, I just used the supplied installer, clicked
>> through it to install the OS on one disk, and set up the rest
>> later on by script. Most things I took from the OpenIndiana
>> package server; I don't even have to create a manifest for
>> xdm any more, as it is supplied by now. It was plain sailing
>> going through the supplied installer.
>> 
>> As I understand it, you partitioned each of your 4 disks into
>> 2 partitions, creating one zpool from 4 partitions across
>> all 4 disks and a second zpool from the other partitions
>> across all 4 physical disks.
>> 
>> I am a bit puzzled as to what advantage you expect. If one
>> disk fails, 2 pools are affected; if 2 disks fail, the box is
>> gone. With rotating hard disks partitioned and the partitions
>> in different pools, the pools limit each other's performance,
>> making the disk heads jump around, and of course zfs disables
>> the disk write cache in such configs as well. I wonder what
>> benefits are left that would be interesting enough to accept
>> this rather slow and low-redundancy configuration.
>> 
>> Perhaps you can tell a bit more about why this setup is
>> preferred. Why not put 2 rather small disks in an rpool
>> mirror, and if space is an issue, add an external disk array?
>> But I guess when experimenting on a fileserver, the need for
>> lots of disk space might not be an issue.
>> 
>> Even with the SmartOS virtualizers I run, I boot them off
>> a USB stick (not over the net, though that is supported) and
>> have the zones running on the boxes' internal disks in some
>> kind of raidz config. If the USB stick fails, the virtualizers
>> fall back to net boot. In fact, something like AutoClient (a
>> Sun product from some time back) would be nice :)
>> 
>> As said, I am a bit puzzled by your config. Please explain.
>> 
>> regards, Rolf
>> 
>> Quoting Reginald Beardsley via openindiana-discuss 
>> <openindiana-discuss at openindiana.org>:
>> 
>>> I should like to note that my battle with the installation
>>> process left me so dispirited that I didn't actually bring the
>>> system online and migrate onto it for over 2 years, and was
>>> hobbling along on a Debian 10 system I built just to provide
>>> remote support to a friend. When Firefox started crashing
>>> constantly I finally started the move, though I still have a
>>> lot of files to move before I can permanently shut down some
>>> of my systems.
>>> 
>>> In my case I was building out a Z840 with space for 4x 4 TB
>>> HDs. I wanted a RAIDZ2 rpool and a RAIDZ1 scratch pool. That
>>> meant I *HAD* to make 2 slices on each drive. At the time I
>>> had 30+ years of experience as a sysadmin: 2-3 years of VMS on
>>> a MicroVAX II "world box" followed by years of Unix
>>> workstations of many flavors. I presume setting up a new
>>> system will take several days. Several weeks was a bit much.
>>> 
>>> I've never lost a battle with a computer, but that ordeal left
>>> me less than eager to fight them. Especially against poorly
>>> conceived and maintained installers such as the current OI
>>> installer.
>>> 
>>> Why is "Install to existing partition" not given as an option
>>> in the installer? Desktop icons with broken links are a huge
>>> FAIL. I know some have been fixed, but IIRC when I tried the
>>> current installer on a test system there were still icons with
>>> broken links. Fortunately, "pkg update" worked well. If it had
>>> not, I might well have abandoned OI entirely and simply gone
>>> to S10_u8 in an air-gapped environment.
>>> 
>>> Before an installation disk image is created, *SOMEONE* needs
>>> to test it with a moderately complex configuration. Assume the
>>> user is a competent admin. There should be no missing pieces.
>>> Ideally several people should do it on different HW. I have
>>> several Z400s which are set up to allow me to swap disks. I
>>> keep all my old disks just for install testing. In the
>>> IDE-drive case I have ~2 dozen old disks in trays, but am no
>>> longer running any IDE systems. With a 3-disk trayless SATA
>>> bay in a Z400 I can do reasonably complex setups. And I have a
>>> talent for breaking installers.
>>> 
>>> If I am given reasonable notice I should be able to do a test 
>>> install or two of varying complexity each time a new LiveCD image is 
>>> ready for release.
>>> 
>>> If someone at my skill level has a problem with the install process 
>>> it is *badly* broken. The OI install process does not compare well 
>>> to Debian or any other distro. Someone with 20 years of Linux 
>>> experience is not likely to get a good impression of OI when they 
>>> hit broken desktop icons.
>>> 
>>> Sadly, McNealy et al. gave themselves such lavish
>>> change-of-control packages that IBM backed out and Oracle
>>> bought Sun instead. Having built AIX to compete with SunOS,
>>> IBM would not have abandoned it as Oracle has. Suffice to say
>>> early AIX was less than robust; not sure about now, as I've
>>> been far away from AIX for over 20 years. The reason Linux is
>>> now dominant in several use contexts is the billion dollars
>>> IBM committed to enhancements to Linux.
>>> 
>>> 
>>> Have Fun!
>>> Reg
>>> 
>>> 
>>> 
>>> On Friday, August 15, 2025 at 09:10:34 PM CDT, Atiq Rahman 
>>> <atiqcx at gmail.com> wrote:
>>> 
>>> 
>>> Hi Till,
>>> Thanks for the reply.
>>> 
>>>> I am not aware on the non-CSM UEFI issue
>>> For me, this has been an issue. At present, the OI installer does not
>>> offer an option to install on a UEFI system without dedicating a whole
>>> disk to it.
>>> 
>>> As a workaround, I am choosing *Install_to_ExistingPool* by pressing
>>> F5. If you follow the mailing lists, you have probably seen that I am
>>> reinventing the wheel: manually taking care of the stuff the installer
>>> does not do when that option is chosen.
>>> 
>>> New users might not have that much energy to go through all that just
>>> to add an OS (or, as they might call it, a distro) to their portable
>>> device.
>>> 
>>> [snip]
>>> _______________________________________________
>>> openindiana-discuss mailing list
>>> openindiana-discuss at openindiana.org
>>> https://openindiana.org/mailman/listinfo/openindiana-discuss
>> 
>> 
>> 
>> 
> 
> 
> 
> 


