[OpenIndiana-discuss] ISCSI SSD Performance
Sašo Kiselkov
skiselkov.ml at gmail.com
Sat Feb 16 21:05:57 UTC 2013
Hi Grant,
On 02/16/2013 05:14 PM, Grant Albitz wrote:
> Hi, I am trying to track down a performance issue with my setup.
Always be sure to do your performance testing on the machine itself
first, before going on to test through more layers of the stack (i.e.
iSCSI). What does "iostat -xn 1" report on the host machine when you do
your tests? The columns are very important; be sure to read the
iostat(1M) manpage to understand what they mean.
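For reference, the output has one line per device under this header:

  r/s  w/s  kr/s  kw/s  wait actv wsvc_t asvc_t  %w  %b device

Pay particular attention to kr/s and kw/s (throughput), actv (number
of commands actively being serviced), asvc_t (average service time in
milliseconds) and %b (percent busy) - a device sitting near 100 %b
with a high asvc_t is likely your bottleneck.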
> I have 24 ssds in 6 vdevs (4 drives per) that are then striped.
> Essentially a raid50. Originally I had a PERC H310 and saw similar
> numbers. I have since switched to a PERC H710 and have each drive
> in raid 0 and presented to the os. In both cases the numbers are the
> same. I also tried 8 devices per vdev with only 3 vdevs and the
> numbers were also the same. I am thinking I am tracking down a zfs
> tunable but I am not sure what to try. My reads are ½ of my writes.
> Given a raid 50 I was expecting the opposite. The volume is being
> presented as an iscsi lun. The benchmarks below were run from a
> vmware guest. The esxi host is connected to the iscsi target via a
> 10g interface (direct connection).
>
> There is no L2ARC, but the entire pool is SSD.
At this point you could be experiencing performance pathologies in any
one of these subsystems:
1) The SSDs themselves (what model are they?)
2) JBOD cabling/backplane
3) The PERC H710 RAID logic
4) COMSTAR
5) iSCSI in-kernel implementation
6) NIC driver & NIC itself (broken/missing TOE, checksum offload, etc.)
7) Network (congestion, packet drops)
What you need to do is minimize the number of components in this path
and, where possible, test each component in isolation.
The first order of business is to test the I/O (ZFS) subsystem on the
machine itself. My recommendations:
1) Restructure your ZFS topology - if you want best performance, an
   array of mirrors (RAID-10) is best (seeing that you are going
   all-SSD, I assume performance is critical here); see the layout
   sketch after this list. If you have to use raidz, use a
   multiple-of-2 number of data drives per raidz group (4 data + 1
   parity = 5 drives per raidz, etc.).
2) Run a bunch of local tests on the machine - seeing as you are
   having trouble with sequential reads as well, do a simple "dd"
   test (example below) to see how much raw throughput you can get
   from your pool.
3) Test your network with something like netperf or similar (example
   below) to make sure the network itself isn't the bottleneck.
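To illustrate (1), a striped pool of mirrors over your 24 drives
would look something like this (the pool name "tank" and the cXtYd0
device names are placeholders for your own):

  # zpool create tank \
      mirror c0t0d0 c0t1d0 \
      mirror c0t2d0 c0t3d0 \
      ...
      mirror c2t6d0 c2t7d0

ZFS stripes across all top-level vdevs automatically, so 12 mirror
pairs give you RAID-10 behavior with no extra configuration.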
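For the dd test in (2), a minimal sketch (the dataset path and sizes
are just examples - adjust to your setup): write a large file, then
read it back:

  # zfs set compression=off tank/test
  # dd if=/dev/zero of=/tank/test/bigfile bs=1024k count=65536
  # dd if=/tank/test/bigfile of=/dev/null bs=1024k

Two caveats: /dev/zero produces data that compresses to nothing, so
turn compression off on the test dataset first, and make the file
comfortably larger than RAM (64 GB here), otherwise the read pass
just measures the ARC rather than the pool.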
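For the network test in (3), start netserver on the target and drive
it from the initiator, e.g. (the address is a placeholder):

  target# netserver
  initiator# netperf -H 192.168.10.1 -l 30

The default TCP_STREAM test should get close to line rate on a direct
10G link; if it doesn't, your problem is below iSCSI, not in ZFS.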
These are just some tests off the top of my head. There are lots of
guides and videos on performance analysis and optimization available
on the net; see, for instance, Brendan Gregg's great talk on just
this matter: https://www.youtube.com/watch?v=xkDqe6rIMa0
Cheers,
--
Saso