[OpenIndiana-discuss] write speeds faster with no ZIL and L2ARC

Lucas Van Tol catseyev9 at hotmail.com
Sat Jun 25 22:53:52 UTC 2011


You might want to look at the asvc_t and %w / %b columns in iostat -xn 1.
The asvc_t should be very low on the Intel SSDs, preferably less than one millisecond.
You might also want to look with only one of the SSDs in use at a time, either L2ARC or ZIL.

Offhand, this sounds a bit odd, especially since a single set of 12 disks in raidz2 isn't particularly fast.
If your F40 is actually honoring cache flushes as expected, it would not have very fast random IO, since it is an MLC drive; the X25-E may still be faster.
Perhaps putting one or two of the F40s as cache, and one or two of the X25-Es as logs, would work better?
If you have the internal slots and hardware to spare, you can add multiple log and cache devices to a single pool.
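A rough sketch of what that layout could look like; pool name "tank" matches the thread, but the device names below are placeholders, not the poster's actual devices:

```shell
# Mirror the log (slog) devices: in-flight sync writes survive
# the loss of one SSD.
zpool add tank log mirror c7t0d0 c7t1d0

# Cache (L2ARC) devices need no redundancy; multiple devices are
# simply striped.
zpool add tank cache c7t2d0 c7t3d0

# Watch per-device latency while a test runs: asvc_t is average
# service time in ms, %w is percent of time waiting, %b percent busy.
iostat -xn 1
```

Since zpool commands require a live pool, this is an administrative sketch rather than something to paste verbatim.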

-Lucas Van Tol

> Date: Fri, 24 Jun 2011 18:19:23 -0700
> From: cmosetick at gmail.com
> To: openindiana-discuss at openindiana.org
> Subject: [OpenIndiana-discuss] write speeds faster with no ZIL and L2ARC
> 
> Hi,
> 
> *Problem:*
> write speeds are faster when no L2ARC or ZIL is configured.
> 
> *Our current setup:*
> We are currently running OpenIndiana b148 (upgraded from b134): Supermicro
> X8DTH-i/6/iF/6F, single Xeon E5504, 24 GB RAM. A single, main storage pool is
> running pool version 28, populated with 12 WD RE 7200rpm SATA disks in a
> RAIDZ2. This pool has two 32 GB Intel X25-E SSDs for ZIL and L2ARC, connected
> directly to SATA ports on the motherboard. The entire system has been in
> operation for about one year with minimal issues. About a week ago we
> started seeing slow write performance so troubleshooting began.
> 
> *What we have done so far / what we know:*
> We removed the ZIL and L2ARC SSDs from the server with
>   zpool remove tank c6t1d0 c6t0d0
> connected them to a Windows machine, and ran the Intel SSD Toolbox
> (a Windows-only application:
> http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=18455)
> on them.
> 
> Using Intel SSD Toolbox 2.0.2.000, we see the following values;
> 
> 09 Power-On Hours Count:
>   ZIL:        Raw: 6783
>   L2ARC: Raw: 8562
> 
> E9 Media Wearout Indicator:
>   ZIL:         Raw: 0  Normalized: 99  Threshold: 0
>   L2ARC:  Raw: 0  Normalized: 99  Threshold: 0
> 
> E1 Host Writes
>   ZIL:         Raw: 47 TB  Normalized: 200  Threshold: 0
>   L2ARC:  Raw: 67 TB  Normalized: 199  Threshold: 0
> 
> Looking at this, we can only conclude that either:
>  1) Intel X25-E drives show no "wear" even after ~50-60 TB of writes, or
>  2) the wearout indicator is broken and unreliable.
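Option 1 is less surprising than it looks. As a back-of-the-envelope check (the endurance figure below is an assumption for illustration, not a number from the Intel datasheet): SLC drives of that era were rated on the order of a petabyte of writes, so ~50-70 TB would consume only a few percent of the drive's life.

```python
# Rough wear arithmetic; endurance_tb is an ASSUMED order-of-magnitude
# figure for a 32 GB SLC drive, not a datasheet value.
endurance_tb = 1000      # assumed ~1 PB total write endurance
host_writes_tb = 67      # the worse of the two drives (E1 Host Writes)

pct_used = 100 * host_writes_tb / endurance_tb
print(f"~{pct_used:.1f}% of assumed endurance consumed")  # single-digit %
```

Under that assumption, a normalized wear value still in the high 90s is plausible rather than broken.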
> 
> Here are some write tests we have performed using rsync to transfer a 3.5GB
> ISO file from my workstation over Gigabit Ethernet to a file system on this
> server. All tests go to the same file system unless otherwise noted, and
> after each test the already-transferred bits were removed from the server.
> 
> TRANSFER    TIME    NOTES/CONDITIONS
> 1 GB        6:00    tank/shares/sw (compression=gzip-6), X25-E ZIL and L2ARC present
> 1 GB        0:50    rpool/home/chris (two SATA disks in mirror, no compression)
> 1 GB        3:30    tank pool without L2ARC
> 1 GB        1:30    tank pool, no L2ARC and no ZIL
> 500 MB      4:30    tank pool, brand new ZIL, Corsair F40GB2 (40 GB)
> 500 MB      3:50    tank pool, new ZIL and new L2ARC, both Corsair F40GB2
> 800 MB      6:00    tank pool, new L2ARC and new ZIL after ~3 hours to "warm up";
>                     l2arcstat went from 47 MB to 22 GB
> 
> So our problem is that even with brand new SSDs, which have MUCH higher
> maximum write speeds than our "old" SSDs, transfers to the storage
> pool happen quicker *without* configured log and cache devices than when they
> are in use. FWIW, looking at iostat -exn while running one of the rsync
> tests above, the time things take seems to match up with the kw/s column.
> 
> Can anyone provide insight to this slow write speed situation when ZIL and
> L2ARC is present?
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss

