[OpenIndiana-discuss] SSDs for ZFS Pool?

Jason Matthews jason at broken.net
Sat Aug 25 03:06:36 UTC 2012


> Do you have any real-life stats about SSD breakage (lifetime
> expectancy) in L2ARC usage modes - did any of your SSDs
> already reach their end-of-life, and if yes - what percentage
> and how old were they? What model(s) do you use?

The oldest round of gear uses 128GB Crucial M4s. I have seen two failures on
the M4s, and neither was due to cell wear. The first was infant mortality:
the drive spontaneously failed to complete writes. The second was an
electrical issue on the bus.

Typical L2ARC wear on my databases is 10% on each device after ten months
of operation, or 7,061 power-on hours at present. I use three or more
devices per pool. My L2ARC exceeds the size of the live data set in all
cases. There are hundreds (and soon thousands) of databases running in
zones.
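
For anyone setting up something similar, attaching several cache devices to
a pool is a one-liner. A generic illumos sketch, with a placeholder pool
name and placeholder device names rather than my actual layout:

    # attach three SSDs to an existing pool as L2ARC (cache) devices;
    # "tank" and the cXtYd0 names are placeholders for your own
    zpool add tank cache c5t1d0 c5t2d0 c5t3d0

    # verify: the devices should appear under the "cache" heading
    zpool status tank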

My database I/O pattern is write-heavy, with a large batch job that runs
essentially back to back. I have l2arc_noprefetch set to zero, which in
theory wears the L2ARC devices faster. It has been the right call, as it
improved the L2ARC cache hit ratio significantly for my I/O pattern.
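
If you want to experiment with it, the tunable can be set persistently in
/etc/system or flipped on a live system with mdb, and the hit ratio read
from the arcstats kstats. A sketch, as I understand the illumos interfaces:

    # persistent across reboots: add this line to /etc/system
    set zfs:l2arc_noprefetch = 0

    # or change it on the running kernel (takes effect immediately)
    echo "l2arc_noprefetch/W0" | mdb -kw

    # watch the L2ARC hit ratio: l2_hits / (l2_hits + l2_misses)
    kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses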

My monitoring system uses Cacti, Zabbix, and Graphite. There the SSDs serve
as the data pool rather than as L2ARC, and the write I/O is much more
intense than on the databases. At the moment I am at 40% wear on each of
the SSDs over roughly the same time period (7,599 power-on hours).
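
For what it's worth, the wear and power-on-hour figures come from SMART. A
smartmontools sketch; the device path is a placeholder, some controllers
need a -d type option, and which attribute reports media wear is
vendor-specific (power-on hours is attribute 9 on nearly everything):

    # dump all SMART data for one SSD; look for Power_On_Hours (9)
    # and the vendor's wear indicator in the attribute table
    smartctl -a /dev/rdsk/c5t1d0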

The warranty on the devices is three years. It is clear the monitoring
system is going to burn through the disks before the warranty runs out. By
the time the L2ARC disks wear out on the databases, I will be off them, off
the spinning rust, and on to Intel 910s or similar devices.

I haven't had enough failures to have any meaningful statistics.


j.