[OpenIndiana-discuss] ISCSI SSD Performance
Jim Klimov
jimklimov at cos.ru
Sat Feb 16 17:14:52 UTC 2013
As a technical terminology nit: all components in a ZFS pool
are "vdevs", albeit at different levels. "Leaf vdevs" (disks,
slices, files) are aggregated into top-level vdevs (single
disks, mirrors, raidzN), and the pool is striped across them.
All in all, if the different vdev topologies yield the same
results, it seems like there is some bottleneck elsewhere
(bus bandwidth, HBA performance ceiling, etc.).
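One quick way to check is to watch per-device statistics on the
storage host while the benchmark runs, e.g.:

    # per-device throughput and service times, 1-second intervals
    iostat -xn 1

If every disk sits well below its individual limit while the
totals plateau, the ceiling is likely in the HBA, bus or network
path rather than in the disks themselves.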
In your tests below there is a string that concerns me: "0Fill".
Does your test write 0x00s into the pool? It is possible that
these blocks are compressed into null allocations (only a
metadata entry is created, but no block on storage); this would
certainly be the case if you have compression enabled on the
test dataset. That would go some way toward explaining why
writes are fast, though there is little reason why null-block
reads should be slower.
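You can check this on the storage host (the dataset name here is
just an example):

    # is compression enabled on the dataset backing the LUN?
    zfs get compression tank/iscsivol

If it reports anything other than "off", all-zero blocks are
elided entirely (stored as holes), so a 0Fill benchmark measures
metadata updates rather than data-path bandwidth.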
Also, in terms of performance testing, is your I/O async or
sync? Is the data volume tested for random I/O much larger than
the storage server's RAM? It is possible that your async writes
land in RAM and return quickly, while reads have to go to disk
at least for the metadata and wait for it (with even more
latency if there is a non-null data block to retrieve). This
would certainly be the case if the data set under test is large
enough that cached metadata expires from the ARC (note that by
default at most 25% of the ARC holds metadata; this is tunable,
see below).
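Both points are easy to inspect on an illumos host; the dataset
name is again an example, and you should verify the kstat names
on your build:

    # does the dataset/zvol force synchronous semantics?
    zfs get sync tank/iscsivol

    # current ARC metadata usage versus its cap
    kstat -p zfs:0:arcstats:arc_meta_used
    kstat -p zfs:0:arcstats:arc_meta_limit

If arc_meta_used sits pinned at arc_meta_limit during the read
tests, raising the limit (e.g. via mdb -kw on a running system)
may help.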
If your tests effectively fit in the storage server's RAM, then
the results are strange indeed...
On 2013-02-16 17:14, Grant Albitz wrote:
> Hi, I am trying to track down a performance issue with my setup.
>
> I have 24 SSDs in 6 vdevs (4 drives per vdev) that are then striped; essentially a RAID 50. Originally I had a PERC H310 and saw similar numbers. I have since switched to a PERC H710 and have each drive in RAID 0, presented to the OS. In both cases the numbers are the same. I also tried 8 devices per vdev with only 3 vdevs, and the numbers were again the same. I think I am chasing a ZFS tunable, but I am not sure what to try. My reads are half of my writes; given a RAID 50 I was expecting the opposite.
>
> The volume is being presented as an iSCSI LUN. The benchmarks below were run from a VMware guest. The ESXi host is connected to the iSCSI target via a 10 GbE interface (direct connection).
>
> There is no L2ARC, but the entire pool is SSD.
>
>
> -----------------------------------------------------------------------
> CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
> Crystal Dew World : http://crystalmark.info/
> -----------------------------------------------------------------------
> * MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
>
> Sequential Read : 361.519 MB/s
> Sequential Write : 690.307 MB/s
> Random Read 512KB : 298.297 MB/s
> Random Write 512KB : 517.125 MB/s
> Random Read 4KB (QD=1) : 19.731 MB/s [ 4817.0 IOPS]
> Random Write 4KB (QD=1) : 18.025 MB/s [ 4400.7 IOPS]
> Random Read 4KB (QD=32) : 174.766 MB/s [ 42667.5 IOPS]
> Random Write 4KB (QD=32) : 209.150 MB/s [ 51061.9 IOPS]
>
> Test : 50 MB [C: 10.8% (32.3/299.7 GB)] (x1) <0Fill>
> Date : 2013/02/16 11:07:10
> OS : Windows 8 Professional [6.2 Build 9200] (x64)
>
>
>
> -----------------------------------------------------------------------
> CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
> Crystal Dew World : http://crystalmark.info/
> -----------------------------------------------------------------------
> * MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
>
> Sequential Read : 351.646 MB/s
> Sequential Write : 652.356 MB/s
> Random Read 512KB : 287.946 MB/s
> Random Write 512KB : 468.861 MB/s
> Random Read 4KB (QD=1) : 19.915 MB/s [ 4862.2 IOPS]
> Random Write 4KB (QD=1) : 17.263 MB/s [ 4214.7 IOPS]
> Random Read 4KB (QD=32) : 169.849 MB/s [ 41466.9 IOPS]
> Random Write 4KB (QD=32) : 189.730 MB/s [ 46320.8 IOPS]
>
> Test : 100 MB [C: 10.8% (32.2/299.7 GB)] (x1) <0Fill>
> Date : 2013/02/16 11:05:07
> OS : Windows 8 Professional [6.2 Build 9200] (x64)
>
--
+============================================================+
| |
| Климов Евгений, Jim Klimov |
| технический директор CTO |
| ЗАО "ЦОС и ВТ" JSC COS&HT |
| |
| +7-903-7705859 (cellular) mailto:jimklimov at cos.ru |
| CC:admin at cos.ru,jimklimov at gmail.com |
+============================================================+
| () ascii ribbon campaign - against html mail |
| /\ - against microsoft attachments |
+============================================================+