[OpenIndiana-discuss] ZFS pool slow as molasses
Brett Dikeman
brett.dikeman at gmail.com
Tue Oct 5 19:23:57 UTC 2010
Greetings all,
I have an OpenSolaris system that I upgraded to OpenIndiana last
night, and I'm not sure whether the timing is a coincidence, but I'm
having a ton of problems with disk performance on a 4x2TB SATA-drive
RAID-Z pool, which is separate from the system/boot pool (on an SSD).
There are no SATA port multipliers and no SAS components; all four
drives are plugged into the motherboard SATA ports, I believe.
De-duplication and compression were turned on; I disabled
de-duplication, with no effect (we weren't seeing any dedup benefit
anyway). I've also upgraded ZFS and the pools, with no effect.
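For what it's worth, these are roughly the commands involved (my pool
is named `data`; adjust to taste):

```shell
# Is dedup actually paying off? A dedupratio of 1.00x means no benefit
zpool list -o name,size,alloc,dedupratio data

# Turn dedup off for the pool's top-level dataset; note that blocks
# already in the dedup table stay there until they are rewritten
zfs set dedup=off data

# Compression has an analogous ratio property
zfs get compression,compressratio data
```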
Symptoms:
-Maintenance jobs from our backup software started taking forever
(these involve scanning a compressed archive), and the count of
simultaneous backup sessions rose as well (backups are
client-initiated every hour).
-zpool scrub on the pool in question runs at about 100-200KB/sec
instead of the more typical 150-200MB/sec (a scrub of the SSD
completes at normal speed).
-Watching the drive activity lights, all four drives appear to be
doing a huge amount of random I/O during the scrub.
-iowait is always zero, and CPU usage is in the single digits.
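In case the numbers help, this is how I've been watching the scrub
(again assuming the pool is named `data`):

```shell
# Scrub progress, estimated completion, and any checksum errors so far
zpool status -v data

# Per-vdev throughput, refreshed every second
zpool iostat -v data 1
```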
Any and all suggestions are heartily accepted, particularly since I'm
a relative newbie in the Solaris world. We're dead in the water at
the moment, and the scrub isn't going to finish in my lifetime at this
rate. More info below.
Thanks!
Brett
uname output:
SunOS d2d 5.11 oi_147 i86pc i386 i86pc Solaris
scanpci information for the SATA controller:
pci bus 0x0000 cardnum 0x1f function 0x02: vendor 0x8086 device 0x3a20
Intel Corporation 82801JI (ICH10 Family) 4 port SATA IDE Controller #1
pci bus 0x0000 cardnum 0x1f function 0x05: vendor 0x8086 device 0x3a26
Intel Corporation 82801JI (ICH10 Family) 2 port SATA IDE Controller #2
Sample output from zpool iostat 1:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        5.55T  1.70T     53     92   126K   198K
rpool       18.9G  10.6G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        5.55T  1.70T     58      8   135K  13.5K
rpool       18.9G  10.6G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        5.55T  1.70T     69      0   164K      0
rpool       18.9G  10.6G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
Sample output from iostat:
# iostat -cxnz 1
     cpu
 us sy wt id
  0  1  0 99
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    4.2    5.5  270.7   50.9  0.0  0.0    3.0    2.1   1   1 c7d0
   64.9    7.4   81.9   27.7  7.5  2.0  103.4   27.1  96  99 c8d0
   65.4    7.9   87.7   29.1  3.9  1.3   52.6   17.6  59  67 c7d1
   65.4    7.5   80.8   27.8  3.8  1.3   52.4   17.5  59  66 c10d0
   64.9    7.8   89.3   29.1  7.5  2.0  102.8   27.0  96 100 c8d1
     cpu
 us sy wt id
  0  0  0 100
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   64.0   26.0   59.5   52.0  7.1  2.0   79.0   21.9  96 100 c8d0
    0.0   39.0    0.0   76.0  0.1  0.0    1.5    0.7   1   1 c7d1
    0.0   28.0    0.0   64.5  0.0  0.0    1.1    0.7   1   1 c10d0
   63.0   25.0   64.5   46.0  7.0  2.0   80.0   22.2  95  99 c8d1
     cpu
 us sy wt id
  0  1  0 99
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   45.0   21.0   35.5   29.0  4.8  1.7   73.4   26.3  74  99 c8d0
   90.0   13.0   83.5    9.0  5.1  1.6   49.5   15.7  74  88 c7d1
   93.0   11.0   74.0    8.0  5.2  1.6   49.8   15.5  74  87 c10d0
   46.0   25.0   39.5   40.5  5.0  1.7   69.7   24.6  75 100 c8d1
     cpu
 us sy wt id
  0  1  0 99
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   72.0    0.0   57.0    0.0  8.0  2.0  111.0   27.8 100 100 c8d0
  101.0    0.0   93.0    0.0  6.6  1.8   65.5   17.7  89  90 c7d1
   96.0    0.0   78.5    0.0  6.4  1.8   66.7   18.5  87  89 c10d0
   71.0    0.0   63.5    0.0  8.0  2.0  112.6   28.2 100 100 c8d1
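The busy drives jump out if you just filter on the %b column; here's a
quick awk one-liner I've been running over the same iostat -xn output
(my own throwaway, not anything standard):

```shell
# Print device name and %busy for any device over 90% busy.
# Data rows of `iostat -xn` have 11 fields; field 10 is %b,
# field 11 is the device name (the header row is skipped).
iostat -xn 1 2 | awk 'NF==11 && $11 != "device" && $10+0 > 90 {print $11, $10"%"}'
```

On the samples above, that consistently flags c8d0 and c8d1, the two
drives pinned at or near 100% busy in every interval.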