[OpenIndiana-discuss] Initial install of 2020.10 on Z840 for production use

Joshua M. Clulow josh at sysmgr.org
Sun Apr 25 21:27:00 UTC 2021


On Sun, 25 Apr 2021 at 13:46, Reginald Beardsley via
openindiana-discuss <openindiana-discuss at openindiana.org> wrote:
> I've done a fresh install using the text installer on a 14-core E5-2680 V4 system with 72 GB of ECC DRAM and a 4x 4 TB 7200 rpm RAIDZ2 array.  With reconfigure set on the post-install boot, it all came up fine.

It's possible that the installer is not creating the /reconfigure file
in the image that is being unpacked; it would be worth checking to
make sure.  The first boot of a new machine after the install
completes should always be a reconfigure boot, even though that means
less and less these days.
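
A quick way to check, assuming the standard illumos /reconfigure
convention and that the freshly unpacked image is still mounted (the
/a mount point below is just an example; adjust to wherever the new
image actually lives):

    # confirm whether the installer left the flag file behind
    ls -l /a/reconfigure

    # if it is missing, create it by hand so the first boot performs
    # device reconfiguration (equivalent to booting with -r)
    touch /a/reconfigure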

> The prior install of 2021.04-rc1 gave ~220 MB/s for reads and writes and 87 MB/s for a file copy on a 3-disk RAIDZ1 with a 1 TB file size.  A /dev/zero write is running very much slower than on 2021.04-rc1, where it took a little over an hour; this write has been running for almost 4 hrs.
>
> With a dd write of 1 TB from /dev/zero to a file running, the GUI response is appallingly slow: minutes to bring up Firefox, 18 seconds to open a MATE terminal, and almost 1 minute to unblank the screen and bring up the screensaver login.
>
> Top shows the ARC consuming 52 GB and about 10 GB free with the CPU 95% idle.  This seems to me very strange for the response I'm getting.
>
> Relative to my Z400s or my NL40, this thing is a complete dog; it's about what an 11/780 that is thrashing badly would do.  I assume that there are system parameters which need to be modified; reducing the size of the ARC seems the most probable first step.
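
For reference, I'm assuming the write test was roughly of this shape
(the pool mountpoint and block size here are made up):

    # 1 TiB sequential write of zeroes to a file on the pool
    dd if=/dev/zero of=/tank/bigfile bs=1024k count=1048576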

I would not assume that tuneables are the answer.  You should look at
what the system is doing while it's performing poorly.  First, I'd take
a look with coarse-grained tools; e.g., mpstat, vmstat -p, iostat,
etc.  If the system is slower than you expect, it will likely be
because of some sort of resource saturation: perhaps the CPUs are all
fully busy, or there is memory pressure and paging, or everything is
being serialised behind some common lock.  Ultimately there will be a
reason that it's slow in this particular case, and it isn't generally
a good assumption that some tuneable is merely set incorrectly.
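
As a rough sketch of the sort of coarse-grained sampling I mean, run
something like this in another terminal while the dd is in flight (the
5-second interval is arbitrary):

    # per-CPU utilisation, cross-calls, and mutex spins
    mpstat 5

    # paging activity broken out by page type
    vmstat -p 5

    # per-device service times and busy percentages for the pool disks
    iostat -xn 5

    # current ARC size in bytes, sampled every 5 seconds
    kstat -p zfs:0:arcstats:size 5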

> For my prior testing I only had 8 GB of DRAM on a single DIMM.  I installed 4x of the fastest new Micron 16 GB DIMMs as specified by the Crucial website plus the 8 GB DIMM it came with.

I suspect it would be best to use only one size and speed of DIMM in
the system at a time.

> "mdump -e" shows no events except for 3x pci.fabric events per boot which I assume are related to missing device drivers.

I would not assume that.  What are the fabric events?  Can you get
more detail with "fmdump -eVp"?
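
For example (piping to a pager, since the verbose payloads can be
long):

    # one-line summary of each error telemetry event
    fmdump -e

    # full verbose detail for every event, including the pci.fabric payload
    fmdump -eVp | less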



Cheers.

-- 
Joshua M. Clulow
http://blog.sysmgr.org


