[OpenIndiana-discuss] Problems with OpenIndiana 151a based iSCSI server.
David Anderson
mr.anderson.neo at gmail.com
Tue Sep 20 21:50:17 UTC 2011
I have a Dell R720 w/ 48GB RAM and 2 CPUs / 4 cores total (E5503 @ 2.0GHz),
2x quad-port Broadcom 5709C cards in the PCIe x4 slots, and
2x LSI 9200-8e cards in the PCIe x8 slots.
The zpool is made up of 4 x 24-bay chassis of Hitachi 2TB SATA disks. The pool
is set up as mirrored-pair vdevs with about 4 drives left out as spares.
The drives are cabled so that each mirror has one side on each controller.
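For reference, the pool was built along these lines (device names below are
placeholders, not the real c#t#d# names, and there are many more mirror pairs
than shown):

  # mirrored pairs, each side on a different LSI controller
  zpool create tank \
      mirror c1t0d0 c2t0d0 \
      mirror c1t1d0 c2t1d0 \
      mirror c1t2d0 c2t2d0 \
      spare  c1t23d0 c2t23d0
  # (remaining mirror pairs follow the same pattern)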
I have two X25-E SSDs which I was using as ZIL, but I have since set the zfs
option sync=disabled, which looks as if it bypasses the ZIL completely. Even
so, I am still seeing very poor iSCSI performance.
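For what it's worth, all I did there was the obvious (the dataset name is
just a placeholder):

  # disable synchronous write semantics on the dataset backing the LUs
  zfs set sync=disabled tank/iscsi
  # confirm the property took effect
  zfs get sync tank/iscsi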
iperf testing between the initiator and target machines is good: 930+ Mb/s
bidirectionally. Jumbo frames are enabled, and flow control is enabled on
the switch.
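Roughly how I ran the network checks (the interface name bnx0 is just an
example):

  # on the target
  iperf -s
  # on the initiator, 30 second run
  iperf -c <target-ip> -t 30
  # confirm jumbo frames on the link
  dladm show-linkprop -p mtu bnx0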
I can read/write from the zpool using dd if=somefile of=someotherfile
bs=128k conv=fsync and get decent throughput from the disks locally, in the
350-500MB/s range depending on the block sizes used.
However, doing the same tests on remote clients yields I/O rates in the
20-30MB/s range.
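The remote runs are essentially the same dd pattern pointed at the iSCSI LUN
(the device path below is hypothetical and differs per client):

  # sequential write to the iSCSI-backed device from a client
  dd if=/dev/zero of=/dev/rdsk/c3t0d0s0 bs=128k count=32768 conv=fsync
  # sequential read back
  dd if=/dev/rdsk/c3t0d0s0 of=/dev/null bs=128k count=32768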
On the client side I am noticing a fairly high TCP retransmission rate;
using DTrace it is quite common to see iSCSI traffic being retransmitted.
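The easiest way I have found to watch it outside of DTrace is the TCP MIB
counters (on an illumos/Solaris client; other OSes have their own netstat
equivalents):

  # retransmission counters climb noticeably during the iSCSI runs
  netstat -s -P tcp | grep -i retrans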
I sort of suspect it is the slower dual-core CPUs, but I am not sure
where to look. With an R610 I had sitting idle, which has the same onboard
Broadcom cards but faster quad-core E5602 CPUs, I didn't see any problems and
was easily able to run dd against iSCSI volumes at ~112MB/s (wire speed), but
that was only a test environment (same switching gear as the production machine).
Using DTrace I measured read/write latency on the iSCSI target machine:
~9ms read / ~1ms write. Average IOPS measured by the io provider is ~1000 read
IOPS and ~2000 write IOPS. When testing with dd locally I can generally
see at least 15,000 IOPS doing sequential I/O with dd through the ZFS
filesystem.
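The latency numbers came from the usual io-provider pattern, something along
the lines of (not my exact script, but the same idea):

  # per-I/O latency distribution on the target (run as root)
  dtrace -n '
    io:::start { ts[arg0] = timestamp; }
    io:::done /ts[arg0]/ {
      @["latency (ns)"] = quantize(timestamp - ts[arg0]);
      ts[arg0] = 0;
    }'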
Does anyone have any ideas where to look next? In a downtime window later
this week I am going to move the faster quad-core procs from the test machine
to the production machine. I have a gut feeling that something in the kernel
is being delayed when interrupt traffic picks up, which is slowing things down.
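My plan for checking that while a client is hammering the target is just the
stock observability tools:

  # interrupt time per device and per CPU, 5 second intervals
  intrstat 5
  # per-CPU interrupt / xcall / syscall rates
  mpstat 5
  # which interrupts are bound to which CPUs
  echo ::interrupts | mdb -k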
With two quad-port gig cards, are there any other settings I should be
looking at, or any other tunables for the bnx driver specifically?
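So far the only NIC-level things I have checked are the link properties, e.g.:

  # current link state and properties for one of the bnx ports
  dladm show-link
  dladm show-linkprop bnx0

I have not touched /kernel/drv/bnx.conf, if that is even the right place to
be looking.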
Thanks,
David
--
David Anderson
mr.anderson.neo at gmail.com