[OpenIndiana-discuss] zfs_arc_max, l2arc size, smartd questions
lavr
lavr at jinr.ru
Wed May 30 08:10:01 UTC 2018
Hi all,
First of all, I want to thank the illumos & OpenIndiana developers for
their good work.
I have a few questions that I have not found answers to.
Server:
# uname -a
SunOS zfsnoc1 5.11 illumos-0b2e825398 i86pc i386 i86pc
# cat /etc/release
OpenIndiana Hipster 2018.04 (powered by illumos)
OpenIndiana Project, part of The Illumos Foundation (C)
2010-2018
Use is subject to license terms.
Assembled 27 April 2018
#
I have a server with 48 GB of RAM and OpenIndiana installed:
# echo "::memstat" | mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     790309              3087    6%
Boot pages                    165                 0    0%
ZFS File Data               58749               229    0%
Anon                        28826               112    0%
Exec and libs                1318                 5    0%
Page cache                   5006                19    0%
Free (cachelist)             4091                15    0%
Free (freelist)          11690060             45664   93%
Total                    12578524             49134
Physical                 12578523             49134
#
Above: Physical = 49134 MB, Free = 45664 MB.
With empty /etc/system, default c_max/c_min:
# kstat -p zfs:0:arcstats | grep :c_m
zfs:0:arcstats:c_max 44007690240
zfs:0:arcstats:c_min 5500961280
#
In Solaris guides, I found that the default zfs_arc_max = allmem - 1 GB,
but I have 48 GB of RAM and the default c_max is ~41 GB.
After searching the illumos sources, I found how the default arc_c_max
is set:
/* set max to 3/4 of all memory, or all but 1GB, whichever is more */
if (allmem >= 1 << 30)
        arc_c_max = allmem - (1 << 30);
...
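As a sanity check, the "all but 1 GB" rule from that snippet can be computed by hand (a sketch; it assumes allmem equals installed RAM, which the numbers below suggest is not quite true on this box):

```shell
# "allmem - 1 GiB" rule from the arc.c snippet above, for a 48 GiB machine
allmem=$(( 48 * 1024 * 1024 * 1024 ))   # assume allmem == installed RAM
arc_c_max=$(( allmem - (1 << 30) ))     # allmem minus 1 GiB
echo "$arc_c_max"                       # 50465865728 (~47 GiB)

# working backwards from the observed default c_max instead:
observed_c_max=44007690240
echo $(( observed_c_max + (1 << 30) )) # 45081432064 (~42 GiB)
```

The gap between ~47 GiB and the observed ~41 GiB default suggests that the kernel's allmem value on this box is around 42 GiB, i.e. noticeably smaller than installed RAM.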
Now I want to set zfs_arc_max=45097156608 (42 GiB):
# egrep -v '^\*|^$' /etc/system
set zfs:zfs_arc_max=45097156608
set zfs:zfs_arc_min=2147483648
#
After a reboot:
# kstat -p zfs:0:arcstats | grep :c_m
zfs:0:arcstats:c_max 44007690240
zfs:0:arcstats:c_min 2147483648
#
I can reduce zfs_arc_max below 44007690240, but I can't set it above the
default arc_c_max=44007690240. c_max does not change. Why?
Question 1: can I set zfs_arc_max above the default arc_c_max?
Is the default arc_c_max = allmem - (1 << 30) a hard upper limit that I
cannot change?
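One possible explanation (an assumption on my part, based on reading arc_init() in usr/src/uts/common/fs/zfs/arc.c; worth verifying against your exact source tree): the zfs_arc_max tunable is only honored when it is below the kernel's allmem, so a request above allmem is silently ignored. Sketched in shell arithmetic, with allmem back-computed from the observed default c_max:

```shell
# hedged sketch of the arc_init() tunable check; allmem is back-computed
# as observed default c_max (44007690240) + 1 GiB
allmem=45081432064
zfs_arc_max=45097156608                 # the requested 42 GiB
if [ "$zfs_arc_max" -gt $(( 64 << 20 )) ] && [ "$zfs_arc_max" -lt "$allmem" ]; then
    arc_c_max=$zfs_arc_max              # tunable accepted
else
    arc_c_max=$(( allmem - (1 << 30) )) # tunable ignored, default kept
fi
echo "$arc_c_max"                       # 44007690240, matching the kstat output
```

The requested 45097156608 is just above this box's apparent allmem of 45081432064, which would explain why the setting is rejected while smaller values work.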
For L2ARC, I must reserve some RAM for the L2ARC header buffers, and I
found the following advice:
(L2ARC size in kilobytes) / (typical recordsize -- or volblocksize -- in
kilobytes) * 70 bytes = ARC header size in RAM
But some people say that newer ZFS uses 200 or 400 bytes per header
instead of 70:
L2ARC_size / recordsize (or volblocksize) * 400 = ARC header size in RAM
Question 2: is this L2ARC calculation correct?
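As a worked example of that formula (the device size and block size below are hypothetical, and the per-header byte counts are just the two figures quoted above; I can't confirm which one a given ZFS version actually uses):

```shell
# RAM consumed by L2ARC headers for a hypothetical 400 GB cache device
l2arc_size=$(( 400 * 1000 * 1000 * 1000 ))  # 400 GB L2ARC device (hypothetical)
blocksize=$(( 8 * 1024 ))                   # 8K volblocksize (hypothetical)
echo $(( l2arc_size / blocksize * 70 ))     # 3417968750  (~3.2 GiB) at 70 B/header
echo $(( l2arc_size / blocksize * 400 ))    # 19531250000 (~18 GiB)  at 400 B/header
```

Whichever constant is right, small block sizes multiply the header count, so the per-header size matters most for zvols with small volblocksize.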
3. I have installed smartmontools from IPS:
# pkg list | grep smart
storage/smartmontools 6.6-2018.0.0.0 i--
#
I commented out the default "DEVICESCAN" and added a single record for a
SAS HDD to /etc/smartd.conf:
# egrep -v '^#|^$' /etc/smartd.conf
/dev/rdsk/c4t5000CCA027A6C581d0s0 -d scsi -H -s (S/../.././22|L/../../6/23) -m lavr at jinr.ru
#
# svcs smartd
STATE STIME FMRI
disabled 9:25:20 svc:/system/smartd:default
#
If I start the smartd service, it ends up in a start/stop loop:
# svcadm enable smartd
# ps axuww | grep smartd | wc -l
586
# ps axuww | grep smartd | wc -l
1559
# svcadm disable smartd
# ps axuww | grep smartd | wc -l
1955
# pkill -9 smartd
# tail /var/svc/log/system-smartd\:default.log
[ May 30 10:38:29 Stopping because all processes in service exited. ]
[ May 30 10:38:29 Executing start method ("/usr/sbin/smartd "). ]
[ May 30 10:38:29 Stopping because all processes in service exited. ]
[ May 30 10:38:29 Executing start method ("/usr/sbin/smartd "). ]
[ May 30 10:38:29 Stopping because all processes in service exited. ]
[ May 30 10:38:29 Executing start method ("/usr/sbin/smartd "). ]
[ May 30 10:38:29 Stopping because all processes in service exited. ]
[ May 30 10:38:29 Executing start method ("/usr/sbin/smartd "). ]
[ May 30 10:38:29 Stopping because service disabled. ]
[ May 30 10:38:29 Executing stop method (:kill). ]
#
Running smartd manually:
- check the config:
# /usr/sbin/smartd -d
smartd 6.6 2017-11-05 r4594 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke,
www.smartmontools.org
Opened configuration file /etc/smartd.conf
Configuration file /etc/smartd.conf parsed.
Device: /dev/rdsk/c4t5000CCA027A6C581d0s0, opened
Device: /dev/rdsk/c4t5000CCA027A6C581d0s0, [HGST HUS724030ALS640 A1C4], lu id: 0x5000cca027a6c580, S/N: P8JYR8EV, 3.00 TB
Device: /dev/rdsk/c4t5000CCA027A6C581d0s0, is SMART capable. Adding to "monitor" list.
Device: /dev/rdsk/c4t5000CCA027A6C581d0s0, state read from /var/lib/smartmontools/smartd.HGST-HUS724030ALS640-P8JYR8EV.scsi.state
Monitoring 0 ATA/SATA, 1 SCSI/SAS and 0 NVMe devices
Device: /dev/rdsk/c4t5000CCA027A6C581d0s0, opened SCSI device
Device: /dev/rdsk/c4t5000CCA027A6C581d0s0, SMART health: passed
Device: /dev/rdsk/c4t5000CCA027A6C581d0s0, state written to /var/lib/smartmontools/smartd.HGST-HUS724030ALS640-P8JYR8EV.scsi.state
^\smartd received signal 3: Quit
Device: /dev/rdsk/c4t5000CCA027A6C581d0s0, state written to /var/lib/smartmontools/smartd.HGST-HUS724030ALS640-P8JYR8EV.scsi.state
smartd is exiting (exit status 0)
- run it as a daemon:
# /usr/sbin/smartd -i 28800 -c /etc/smartd.conf
# ps axuww | grep smartd
root 7270 0.0 0.0 4952 976 ? S 10:56:35 0:00 /usr/sbin/smartd -i 28800 -c /etc/smartd.conf
#
No problem.
What's wrong? Why does SMF restart smartd in a loop, and how can I debug
this?
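A few things worth trying to narrow down an SMF restart loop like this (hedged suggestions; these are standard illumos tools, but the property names in the smartd manifest on a given system may differ):

```shell
# show the restarter's own diagnosis of the service
svcs -xv smartd

# inspect the start method and its process-duration model; a start command
# that daemonizes (parent exits) under a mismatched duration model is a
# classic cause of "all processes in service exited" loops
svccfg -s smartd listprop start

# run the exact start method by hand under truss to see what it execs,
# forks, and exits with
truss -f -o /tmp/smartd.truss /usr/sbin/smartd
```

Comparing the manifest's start exec string against the flags that work when run by hand (here, `-i 28800 -c /etc/smartd.conf`) may also show what differs between the two cases.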
Thanks
--
lavr