[OpenIndiana-discuss] iSCSI SSD Performance

Sašo Kiselkov skiselkov.ml at gmail.com
Mon Feb 18 08:18:09 UTC 2013


On 02/18/2013 05:18 AM, Grant Albitz wrote:
> I would like to discuss one more item:
> 
> Based on the writes below vs. the reads, it seems like I am able to get more data out of a w/s as opposed to a read per second. I may just be misunderstanding the results, but the disks themselves are rated for higher read performance than write. I am just wondering if there is a zfs tunable I can play with. The disks also seem to have much more queued up during writes than during reads.
> 
> 
> time dd if=/dev/zero of=/PSC.Net/dd.tst bs=2048000 count=131027
> r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t24d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d0
>     0.0 1057.2    0.0 89869.6  0.0  9.6    0.0    9.1   1  67 c1t0d0
>     0.0 1057.2    0.0 89852.5  0.0  9.6    0.0    9.1   1  67 c1t1d0
>     0.0 1057.4    0.0 89946.6  0.0  9.6    0.0    9.1   1  66 c1t2d0
>     0.0 1057.0    0.0 89912.4  0.0  9.6    0.0    9.1   1  67 c1t3d0
>     0.0 1038.0    0.0 87068.1  0.0  9.5    0.0    9.1   1  66 c1t4d0
>     0.0 1037.2    0.0 87015.8  0.0  9.5    0.0    9.1   1  66 c1t5d0
>     0.0 1037.0    0.0 87076.6  0.0  9.5    0.0    9.2   1  66 c1t6d0
> 
> time dd if=/PSC.Net/dd.tst of=/dev/null bs=2048000
>   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t24d0
>     0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t0d0
>   884.1    0.0 38373.3    0.0  0.0  0.3    0.0    0.4   1  13 c1t0d0
>   873.7    0.0 38425.6    0.0  0.0  0.3    0.0    0.4   1  14 c1t1d0
>   883.5    0.0 38399.9    0.0  0.0  0.3    0.0    0.4   1  13 c1t2d0
>   875.1    0.0 38408.2    0.0  0.0  0.3    0.0    0.4   1  13 c1t3d0
>   876.5    0.0 38528.1    0.0  0.0  0.3    0.0    0.4   1  13 c1t4d0
>   880.7    0.0 38520.2    0.0  0.0  0.3    0.0    0.4   1  14 c1t5d0
>   874.9    0.0 38510.4    0.0  0.0  0.3    0.0    0.4   1  13 c1t6d0

That PERC H710 is probably really skewing your results here. First of
all, the transparent compression done by the SSD controllers seems to be
helping you out in the write test: dd from /dev/zero produces all-zero
blocks, which the drives can compress almost for free. You'll need to
circumvent that sucker (e.g. by writing incompressible, pre-compressed
data).
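
A minimal way to do that, assuming /dev/urandom and a tmpfs-backed /tmp
on your box (file names and sizes below are just illustrative), is to
stage an incompressible file first, so the slow random source doesn't
become the bottleneck:

  # stage ~2 GB of incompressible data (keep it smaller than free RAM,
  # since /tmp is tmpfs); /dev/urandom is too slow to dd from directly
  dd if=/dev/urandom of=/tmp/rand.bin bs=1024k count=2048

  # redo the write test with data the SSDs cannot compress away
  time dd if=/tmp/rand.bin of=/PSC.Net/dd.rnd bs=2048000

If the write numbers drop noticeably compared to the /dev/zero run,
that was the drive-level compression you were seeing.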

Next, I suggest you get rid of the H710 if possible, stick the H310 in
and configure it in JBOD mode. We can try to resolve the driver issues;
there has been some work on this in Illumos lately, so you might be able
to build some custom driver modules (or I can do it for you).
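
Once the controller presents the disks raw, they should show up as
individual targets. A couple of quick sanity checks (output will of
course differ on your box):

  echo | format    # non-interactive list of visible disks
  iostat -En       # per-device summary incl. vendor/model strings

If the disks appear with their real vendor/model strings instead of a
single RAID volume, the HBA is out of the data path.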

Also, I take it that the speeds you quoted earlier are not "Gbps"
(gigabits per second) but "GBps" (gigabytes per second) - that's an
8x difference. 1.1-1.2 GBps means you totally maxed out your 10 Gigabit
Ethernet link (1.2 GBps x 8 bits = 9.6 Gbps). OTOH, if you were trying
to do this locally, you probably still had room to go; SSDs can
typically do ~250-500 MB/s each in reads and writes locally.
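
To see how much aggregate bandwidth the pool itself is moving while a
test runs (and whether you're anywhere near the wire rate), you can sum
the per-disk throughput columns from iostat. A rough sketch, assuming
the c1t* disk names from your output above:

  # the second iostat report is the live 5-second interval (the first
  # is the since-boot average, so skip it by counting header lines)
  iostat -xn 5 2 | awk '/device$/ { n++ }
      n==2 && /c1t/ { r += $3; w += $4 }
      END { printf("%.1f MB/s read, %.1f MB/s write\n", r/1024, w/1024) }'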

Cheers,
--
Saso


