[OpenIndiana-discuss] ISCSI SSD Performance

Grant Albitz GAlbitz at Albitz.Biz
Sat Feb 16 21:58:31 UTC 2013


Local dd results:

write 268.343296 GB via dd, please wait...
time dd if=/dev/zero of=/PSC.Net/dd.tst bs=2048000 count=131027

131027+0 records in
131027+0 records out

real     3:09.8
user        0.1
sys      2:40.1

268.343296 GB in 189.8s = 1413.82 MB/s Write


131027+0 records in
131027+0 records out
268343296000 bytes (268 GB) copied, 146.209 s, 1.8 GB/s

real    2m26.215s
user    0m0.202s
sys     2m24.469s


So I am getting expected results locally. The network infrastructure is as follows:

2 ESXi hosts with Intel X520 10 GbE cards, directly connected via Twinax cables to Intel X520 cards in the ZFS box, so there is no switch in between. MTU is set to 9000 on the ZFS host as well as on both ESXi hosts. The results from both ESXi hosts are the same.
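
One quick sanity check on the jumbo-frame setup (interface names here
are examples; the X520 typically shows up as ixgbe on illumos):

  # on the ZFS box: confirm the effective MTU of the link
  dladm show-linkprop -p mtu ixgbe0
  # on each ESXi host: list vmkernel NICs and their MTU
  esxcfg-vmknic -l
  # from ESXi, verify that 9000-byte frames pass unfragmented
  # (8972 = 9000 minus 28 bytes of IP/ICMP headers)
  vmkping -d -s 8972 <zfs-host-ip>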

Since I can get 700-800 MB/s write, you would assume the link could handle more than 350 MB/s read...

There may be some VMware issue I am hitting, but I had the same setup before with slower disks and netted better performance in VMware. Are there any settings in COMSTAR I can look at that could be causing this?
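
For reference, this is the COMSTAR state I can poke at (the LU GUID
below is a placeholder); wcd, i.e. write-cache-disable, is one per-LU
property known to affect throughput:

  # show logical units and their properties (block size, wcd, ...)
  stmfadm list-lu -v
  # show targets and their active sessions
  itadm list-target -v
  # example: enable the write cache on one LU
  stmfadm modify-lu -p wcd=false <lu-guid>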


-----Original Message-----
From: Sašo Kiselkov [mailto:skiselkov.ml at gmail.com] 
Sent: Saturday, February 16, 2013 4:06 PM
To: openindiana-discuss at openindiana.org
Subject: Re: [OpenIndiana-discuss] ISCSI SSD Performance

Hi Grant,

On 02/16/2013 05:14 PM, Grant Albitz wrote:
> Hi I am trying to track down a performance issue with my setup.

Always be sure to do your performance testing on the machine itself
first, before going on to test through more layers of the stack (i.e.
iSCSI). What does "iostat -xn 1" report on the host machine when you
run your tests? The columns are very important; be sure to read the
iostat(1M) manpage to understand what they mean.
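
Something like this, run side by side, is usually revealing (the
target path is taken from your earlier test):

  # terminal 1: per-device statistics at 1-second intervals
  iostat -xn 1
  # terminal 2: generate sequential load, ~100 GB
  dd if=/dev/zero of=/PSC.Net/dd.tst bs=1024k count=100000

In particular watch asvc_t (average service time) and %b (busy) to
see whether individual devices are saturating.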

> I have 24 SSDs in 6 vdevs (4 drives per) that are then striped -
> essentially a RAID 50. Originally I had a PERC H310 and saw similar
> numbers. I have since switched to a PERC H710 and have each drive in
> RAID 0 and presented to the OS. In both cases the numbers are the
> same. I also tried 8 devices per vdev with only 3 vdevs and the
> numbers were also the same. I am thinking I am tracking down a ZFS
> tunable but I am not sure what to try. My reads are ½ of my writes.
> Given a RAID 50 I was expecting the opposite. The volume is being
> presented as an iSCSI LUN. The benchmarks below were run from a
> VMware guest. The ESXi host is connected to the iSCSI target via a
> 10 GbE interface (direct connection).
> 
> There is no L2ARC, but the entire pool is SSD.

At this point you could be experiencing performance pathologies in any one of these subsystems:

1) The SSDs themselves (what model are they?)
2) JBOD cabling/backplane
3) The PERC H710 RAID logic
4) COMSTAR
5) iSCSI in-kernel implementation
6) NIC driver & NIC itself (broken/missing TOE, checksum offload, etc.)
7) Network (congestion, packet drops)

What you need to do is minimize the number of components in this path and, where possible, test each one in isolation.
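
As a quick first pass at points 6 and 7:

  # link state, speed and duplex of the physical NICs
  dladm show-phys
  # per-interface packet and error counters; Ierrs/Oerrs should be 0
  netstat -i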

The first order of business is to test the I/O (ZFS) subsystem on the machine itself. My recommendations:

1) Restructure your ZFS topology - if you want the best performance,
   an array of mirrors (RAID-10) is the way to go (seeing that you are
   going all-SSD, I assume performance is critical here). If you have
   to use raidz, use a power-of-2 number of data drives per raidz
   group (4 data + 1 parity = 5 drives per raidz, etc.); a layout
   sketch follows this list.

2) Run a bunch of local tests on the machine - seeing as you are
   having trouble with sequential reads as well, do a simple "dd" test
   to see how much raw throughput you can get from your pool (a read
   test is sketched after this list).

3) Test your network with something like netperf to make sure the
   link itself isn't the bottleneck (example after this list).
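
To illustrate point 1, a RAID-10 layout would look like this (device
names are hypothetical; continue the mirror pairs through all 24
drives):

  zpool create tank \
      mirror c0t0d0 c0t1d0 \
      mirror c0t2d0 c0t3d0 \
      mirror c0t4d0 c0t5d0
  zpool status tank    # verify the layout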
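
For point 2, note that a read test right after a write will largely
be served from ARC, not the disks. One way around that (pool/dataset
names are assumed) is to cache only metadata on the test dataset:

  zfs set primarycache=metadata tank/bench
  dd if=/dev/zero of=/tank/bench/seq.dat bs=1024k count=100000
  dd if=/tank/bench/seq.dat of=/dev/null bs=1024k
  zfs set primarycache=all tank/bench    # restore the default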
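
And for point 3, a minimal netperf run (the address is a placeholder)
- a healthy direct 10 GbE link with jumbo frames should come in close
to line rate:

  # on the ZFS box: start the receiver
  netserver
  # on the initiator side: 30-second bulk TCP transfer
  netperf -H 192.168.10.1 -t TCP_STREAM -l 30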

These are just a few ideas off the top of my head. There are lots of
guides and videos on performance analysis and optimization available
on the net; see for instance Brendan Gregg's great talk on just this
matter: https://www.youtube.com/watch?v=xkDqe6rIMa0

Cheers,
--
Saso



