[OpenIndiana-discuss] Poor relative performance of SAS over SATA drives

Hong Wei Liam weiliam.hong at gmail.com
Thu Oct 27 00:10:38 UTC 2011


I tried checking with the configuration utility (via Ctrl-C while booting), but could not find any way to display the link speed.

I have tried different disk backplanes on Supermicro 745TQ, 836A, and 846E2 chassis, all with the same results.

Regards,
WL

On Oct 27, 2011, at 8:03 AM, Jason Matthews wrote:

> 
> 
> What is the link speed of the SAS drives as reported by the controller?
> 
> j.
> 
> -----Original Message-----
> From: Hong Wei Liam [mailto:weiliam.hong at gmail.com] 
> Sent: Wednesday, October 26, 2011 4:59 PM
> To: Discussion list for OpenIndiana
> Subject: Re: [OpenIndiana-discuss] Poor relative performance of SAS over SATA
> drives
> 
> I had initially tried having only the SAS drives installed, with the same
> results.
> 
> Later, the SATA drives were added in for comparison.
> 
> Regards,
> WL
> 
> 
> On Oct 27, 2011, at 2:27 AM, Jason J. W. Williams wrote:
> 
>> Is the card hosting any SATA and SAS drives on the same port, or are
>> they segregated SAS on one and SATA on the other?
>> 
>> -J
>> 
>> On Wed, Oct 26, 2011 at 9:01 AM, weiliam.hong <weiliam.hong at gmail.com> wrote:
>>> Greetings,
>>> 
>>> I have a fresh installation of OI151a:
>>> - SM X8DTH, 12GB RAM, LSI 9211-8i (latest IT-mode firmware)
>>> - pool_A : SG ES.2 Constellation (SAS)
>>> - pool_B : WD RE4 (SATA)
>>> - no settings in /etc/system
>>> 
>>> 
>>> *zpool status output*
>>> -------------------
>>> admin@openindiana:~# zpool status
>>> pool: pool_A
>>> state: ONLINE
>>> scan: none requested
>>> config:
>>> 
>>>       NAME                       STATE     READ WRITE CKSUM
>>>       pool_A                     ONLINE       0     0     0
>>>         mirror-0                 ONLINE       0     0     0
>>>           c7t5000C50035062EC1d0  ONLINE       0     0     0
>>>           c8t5000C50034C03759d0  ONLINE       0     0     0
>>> 
>>> pool: pool_B
>>> state: ONLINE
>>> scan: none requested
>>> config:
>>> 
>>>       NAME                       STATE     READ WRITE CKSUM
>>>       pool_B                     ONLINE       0     0     0
>>>         mirror-0                 ONLINE       0     0     0
>>>           c1t50014EE057FCD628d0  ONLINE       0     0     0
>>>           c2t50014EE6ABB89957d0  ONLINE       0     0     0
>>> 
>>> 
>>> *Load generation via 2 concurrent dd streams:*
>>> --------------------------------------------------
>>> dd if=/dev/zero of=/pool_A/bigfile bs=1024k count=1000000
>>> dd if=/dev/zero of=/pool_B/bigfile bs=1024k count=1000000
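
[Editor's note: the dd streams above can be wrapped in a small timing script that reports throughput directly, which makes runs easier to compare. A minimal sketch; TARGET and the scaled-down sizes are placeholders, not from the original test, which wrote up to ~1 TB per pool.]

```shell
#!/bin/sh
# Sketch: time one dd stream and report approximate write throughput.
# TARGET is a placeholder; point it at a file on the pool under test
# (e.g. /pool_A/bigfile).
TARGET=/tmp/dd_testfile
BS=1048576          # 1 MiB block size, matching bs=1024k above
COUNT=64            # 64 MiB total; the original test used count=1000000
START=$(date +%s)
dd if=/dev/zero of="$TARGET" bs="$BS" count="$COUNT" 2>/dev/null
END=$(date +%s)
ELAPSED=$((END - START))
[ "$ELAPSED" -lt 1 ] && ELAPSED=1   # avoid divide-by-zero on fast runs
echo "wrote $COUNT MiB in ${ELAPSED}s (~$((COUNT / ELAPSED)) MiB/s)"
rm -f "$TARGET"
```

[Note that dd to a ZFS file mostly measures cached writes until the ARC fills and the pool becomes the bottleneck, which is consistent with the throughput drop appearing only after several minutes.]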
>>> 
>>> 
>>> *Initial Observation*
>>> -------------------
>>> 
>>>              capacity     operations    bandwidth
>>> pool        alloc   free   read  write   read  write
>>> ----------  -----  -----  -----  -----  -----  -----
>>> pool_A      1.68G  2.72T      0    652      0  73.4M
>>>   mirror    1.68G  2.72T      0    652      0  73.4M
>>>     c7t5000C50035062EC1d0      -      -      0    619      0  73.4M
>>>     c8t5000C50034C03759d0      -      -      0    619      0  73.4M
>>> ----------  -----  -----  -----  -----  -----  -----
>>> pool_B      1.54G  1.81T      0  1.05K      0   123M
>>>   mirror    1.54G  1.81T      0  1.05K      0   123M
>>>     c1t50014EE057FCD628d0      -      -      0  1.02K      0   123M
>>>     c2t50014EE6ABB89957d0      -      -      0  1.01K      0   123M
>>> 
>>> *10-15 mins later*
>>> ----------------
>>> 
>>>              capacity     operations    bandwidth
>>> pool        alloc   free   read  write   read  write
>>> ----------  -----  -----  -----  -----  -----  -----
>>> pool_A      15.5G  2.70T      0     50      0  6.29M
>>>   mirror    15.5G  2.70T      0     50      0  6.29M
>>>     c7t5000C50035062EC1d0      -      -      0     62      0  7.76M
>>>     c8t5000C50034C03759d0      -      -      0     50      0  6.29M
>>> ----------  -----  -----  -----  -----  -----  -----
>>> pool_B      28.0G  1.79T      0  1.07K      0   123M
>>>   mirror    28.0G  1.79T      0  1.07K      0   123M
>>>     c1t50014EE057FCD628d0      -      -      0  1.02K      0   123M
>>>     c2t50014EE6ABB89957d0      -      -      0  1.02K      0   123M
>>> 
>>> 
>>> 
>>> Questions:
>>> 1. Why do the SG SAS drives degrade to <10 MB/s after 10-15 min, while the
>>> WD RE4 drives remain consistent at >100 MB/s?
>>> 2. Why do the SG SAS drives show only 70+ MB/s, when the published figures
>>> are >100 MB/s? Refer here:
>>> <http://www.seagate.com/www/en-us/products/enterprise-hard-drives/constellation-es/constellation-es-2/#tTabContentSpecifications>
>>> 3. All 4 drives are connected to a single HBA, so I assume the mpt_sas
>>> driver is used. Are SAS and SATA drives handled differently?
>>> 
>>> 
>>> This is a test server, so any ideas to try that would help me understand
>>> this would be greatly appreciated.
>>> 
>>> 
>>> Many thanks,
>>> WL
>>> 
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> OpenIndiana-discuss mailing list
>>> OpenIndiana-discuss at openindiana.org
>>> http://openindiana.org/mailman/listinfo/openindiana-discuss
>>> 
> 
> 



