[OpenIndiana-discuss] Poor relative performance of SAS over SATA drives
Hong Wei Liam
weiliam.hong at gmail.com
Wed Oct 26 23:58:45 UTC 2011
I had initially tried having only the SAS drives installed, with the same results.
Later, the SATA drives were added in for comparison.
Regards,
WL
On Oct 27, 2011, at 2:27 AM, Jason J. W. Williams wrote:
> Is the card hosting any SATA and SAS drives on the same port, or are
> they segregated SAS on one and SATA on the other?
>
> -J
>
> On Wed, Oct 26, 2011 at 9:01 AM, weiliam.hong <weiliam.hong at gmail.com> wrote:
>> Greetings,
>>
>> I have a fresh installation of OI151a:
>> - SM X8DTH, 12GB RAM, LSI 9211-8i (latest IT-mode firmware)
>> - pool_A : SG ES.2 Constellation (SAS)
>> - pool_B : WD RE4 (SATA)
>> - no settings in /etc/system
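>>
>> For reference, both pools are plain two-way mirrors, created along these
>> lines (command sketch only; assumes default pool properties):
>>
>> zpool create pool_A mirror c7t5000C50035062EC1d0 c8t5000C50034C03759d0
>> zpool create pool_B mirror c1t50014EE057FCD628d0 c2t50014EE6ABB89957d0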
>>
>>
>> *zpool status output*
>> -------------------
>> admin@openindiana:~# zpool status
>> pool: pool_A
>> state: ONLINE
>> scan: none requested
>> config:
>>
>> NAME                       STATE     READ WRITE CKSUM
>> pool_A                     ONLINE       0     0     0
>>   mirror-0                 ONLINE       0     0     0
>>     c7t5000C50035062EC1d0  ONLINE       0     0     0
>>     c8t5000C50034C03759d0  ONLINE       0     0     0
>>
>> pool: pool_B
>> state: ONLINE
>> scan: none requested
>> config:
>>
>> NAME                       STATE     READ WRITE CKSUM
>> pool_B                     ONLINE       0     0     0
>>   mirror-0                 ONLINE       0     0     0
>>     c1t50014EE057FCD628d0  ONLINE       0     0     0
>>     c2t50014EE6ABB89957d0  ONLINE       0     0     0
>>
>>
>> *Load generation via 2 concurrent dd streams:*
>> --------------------------------------------------
>> dd if=/dev/zero of=/pool_A/bigfile bs=1024k count=1000000
>> dd if=/dev/zero of=/pool_B/bigfile bs=1024k count=1000000
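>>
>> The throughput tables below were captured while both dd streams were
>> running, using something along these lines (the interval is illustrative):
>>
>> zpool iostat -v pool_A pool_B 5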
>>
>>
>> *Initial Observation*
>> -------------------
>>
>>                              capacity     operations    bandwidth
>> pool                       alloc   free   read  write   read  write
>> -------------------------  -----  -----  -----  -----  -----  -----
>> pool_A                     1.68G  2.72T      0    652      0  73.4M
>>   mirror                   1.68G  2.72T      0    652      0  73.4M
>>     c7t5000C50035062EC1d0      -      -      0    619      0  73.4M
>>     c8t5000C50034C03759d0      -      -      0    619      0  73.4M
>> -------------------------  -----  -----  -----  -----  -----  -----
>> pool_B                     1.54G  1.81T      0  1.05K      0   123M
>>   mirror                   1.54G  1.81T      0  1.05K      0   123M
>>     c1t50014EE057FCD628d0      -      -      0  1.02K      0   123M
>>     c2t50014EE6ABB89957d0      -      -      0  1.01K      0   123M
>>
>> *10-15 mins later*
>> -------------------
>>
>>                              capacity     operations    bandwidth
>> pool                       alloc   free   read  write   read  write
>> -------------------------  -----  -----  -----  -----  -----  -----
>> pool_A                     15.5G  2.70T      0     50      0  6.29M
>>   mirror                   15.5G  2.70T      0     50      0  6.29M
>>     c7t5000C50035062EC1d0      -      -      0     62      0  7.76M
>>     c8t5000C50034C03759d0      -      -      0     50      0  6.29M
>> -------------------------  -----  -----  -----  -----  -----  -----
>> pool_B                     28.0G  1.79T      0  1.07K      0   123M
>>   mirror                   28.0G  1.79T      0  1.07K      0   123M
>>     c1t50014EE057FCD628d0      -      -      0  1.02K      0   123M
>>     c2t50014EE6ABB89957d0      -      -      0  1.02K      0   123M
>>
>>
>>
>> Questions:
>> 1. Why do the SG SAS drives degrade to <10 MB/s after 10-15 min, while the
>> WD RE4 drives remain consistent at >100 MB/s?
>> 2. Why do the SG SAS drives show only 70+ MB/s when the published figures
>> are >100 MB/s? Refer here
>> <http://www.seagate.com/www/en-us/products/enterprise-hard-drives/constellation-es/constellation-es-2/#tTabContentSpecifications>.
>> 3. All 4 drives are connected to a single HBA, so I assume the mpt_sas
>> driver is used. Are SAS and SATA drives handled differently? (A sketch of
>> commands I can use to check this follows below.)
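>>
>> A sketch of what I plan to check for question 3 (standard illumos tools;
>> the interactive format(1M) steps are written out as comments):
>>
>> # confirm which driver the HBA and disks are bound to (expecting mpt_sas)
>> prtconf -D | grep -i mpt
>>
>> # check the write-cache setting on one of the SG SAS drives:
>> # format -e -> select the disk -> cache -> write_cache -> display
>> format -e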
>>
>>
>> This is a test server, so any ideas to try that would help me understand
>> are greatly appreciated.
>>
>>
>> Many thanks,
>> WL
>>
>>
>>
>>
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss