[OpenIndiana-discuss] Recommendations for fast storage

Jay Heyl jay at frelled.us
Tue Apr 16 20:48:11 UTC 2013


On Tue, Apr 16, 2013 at 11:54 AM, Jim Klimov <jimklimov at cos.ru> wrote:

> On 2013-04-16 20:30, Jay Heyl wrote:
>
>> What would be the logic behind mirrored SSD arrays? With spinning
>> platters the mirrors improve performance by allowing whichever mirror
>> responds to a particular command fastest to be the one that defines
>> throughput. With [...]
>
> Well, to think up a rationale: it is quite possible to saturate a bus
> or an HBA with SSDs, leading to increased latency under intense IO
> simply because some tasks (data packets) sit in a queue waiting for
> the bottleneck to clear. If the other side of the mirror has a
> different connection (another HBA, another PCI bus), then IOs can go
> there instead, increasing overall performance.
>

This strikes me as a strong argument for carefully planning the arrangement
of storage devices of any sort in relation to HBAs and buses. It seems
significantly less strong as an argument for a mirror _maybe_ having a
different connection and responding faster.
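Jim's scenario is easy to put in toy numbers, though (a sketch with
made-up figures, not a model of any real HBA). Mirrored writes have to
go to both sides of the mirror, so putting both sides behind one HBA
doubles the write traffic on that link:

    # Toy model: every mirrored write is sent to both sides of the mirror.
    # If both sides hang off one HBA, that HBA carries 2x the write traffic.
    hba_mbps = 600.0     # assumed usable bandwidth of one HBA link, MB/s
    write_mbps = 400.0   # assumed client write rate, MB/s

    one_hba = 2 * write_mbps / hba_mbps   # both mirror sides share one link
    two_hba = write_mbps / hba_mbps       # one side per HBA

    print(f"Single HBA utilization: {one_hba:.0%}")             # 133% -> queueing
    print(f"Per-HBA utilization, split mirror: {two_hba:.0%}")  # 67%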

My question about the rationale behind the suggestion of mirrored SSD
arrays was really aimed at the question from the OP. I don't see how
mirrored arrays of SSDs would be effective in his situation.

Personally, I'd go with RAID-Z2 or RAID-Z3 unless the computational load on
the CPU is especially high. This would give you as good as or better fault
protection than mirrors at significantly less cost. Indeed, given his
scenario of write early, read often later on, I might even be tempted to go
for the new TLC SSDs from Samsung. For this particular use, the much
reduced write endurance ("lifetime") of the devices would probably not be
a factor at all. OTOH,
given the almost-no-limits budget, shaving $100 here or there is probably
not a big consideration. (And just to be clear, I would NOT recommend the
TLC SSDs for a more general solution. It was specifically the write-few,
read-many scenario that made me think of them.)
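To make the cost comparison concrete, here is a back-of-the-envelope
sketch in Python (the drive count and sizes are made-up assumptions, not
numbers from the thread):

    # Usable capacity and fault tolerance: RAID-Z2 vs. 2-way mirrors.
    # Assumptions (illustrative only): one 10-drive pool of 1 TB drives.
    drives = 10
    size_tb = 1.0

    # RAID-Z2: two drives' worth of parity; survives ANY two failures.
    raidz2_tb = (drives - 2) * size_tb    # 8 TB usable
    # 2-way mirrors: half the raw capacity; a pair dies if both disks fail.
    mirror_tb = drives // 2 * size_tb     # 5 TB usable

    print(f"RAID-Z2 usable: {raidz2_tb:.0f} TB (any 2 disks may fail)")
    print(f"Mirrors usable: {mirror_tb:.0f} TB (1 disk per pair may fail)")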

> Basically, this answer stems from the same logic as "why would we
> need 6Gbit/s on HDDs?" Indeed, HDDs are unlikely to saturate their
> buses even with sequential reads. The link speed really applies to the
> bursts of IO between the system and the HDD's cache. Doubling the bus
> speed roughly halves the time an HDD needs to keep the bus busy for
> its portion of IO. And when there are hundreds of disks sharing a
> resource (an expander, for example), this begins to matter.
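Jim's burst arithmetic is easy to check (a quick sketch; the 1 MiB burst
size is a made-up example):

    # Time a disk holds the bus while moving one 1 MiB burst from its cache.
    # Payload rates assume 8b/10b encoding (80% of the line rate).
    burst_bytes = 1024 * 1024

    for line_rate_gbps in (3, 6):
        payload_bytes_s = line_rate_gbps * 1e9 / 8 * 0.8  # bytes/s of payload
        ms = burst_bytes / payload_bytes_s * 1e3          # bus time in ms
        print(f"{line_rate_gbps} Gb/s link: {ms:.2f} ms on the bus per burst")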


It's actually not all that difficult to saturate a 6Gb/s pathway with ZFS
when there are multiple storage devices on the other end of that path. No
single HDD today is going to come close to needing the full 6Gb/s, but put
four or five of them hanging off that same path and that ultra-super
highway starts looking pretty congested. Put SSDs on the other end and the
6Gb/s pathway quickly becomes your bottleneck.
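Some rough arithmetic makes the point (Python; the per-device throughput
figures are ballpark assumptions, not measurements from the thread):

    # How many devices does it take to saturate a shared 6 Gb/s link?
    # 6 Gb/s SATA/SAS uses 8b/10b encoding, so payload tops out near 600 MB/s.
    link_mb_s = 6000 * 0.8 / 8      # ~600 MB/s of payload bandwidth

    hdd_mb_s = 180                  # assumed sequential HDD throughput
    ssd_mb_s = 500                  # assumed sequential SSD throughput

    print(f"Link payload: {link_mb_s:.0f} MB/s")
    print(f"HDDs to saturate it: {link_mb_s / hdd_mb_s:.1f}")  # ~3.3
    print(f"SSDs to saturate it: {link_mb_s / ssd_mb_s:.1f}")  # ~1.2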

