[OpenIndiana-discuss] HBA failover

Sebastian Gabler sequoiamobil at gmx.net
Tue Jun 18 12:34:35 UTC 2013


On 18.06.2013 06:15, openindiana-discuss-request at openindiana.org wrote:
> Message: 7
> Date: Mon, 17 Jun 2013 17:00:37 -0700
> From: Richard Elling <richard.elling at richardelling.com>
> To: Discussion list for OpenIndiana
> 	<openindiana-discuss at openindiana.org>
> Subject: Re: [OpenIndiana-discuss] HBA failover
> Message-ID:<B4BF5130-A0A2-4D91-ADA4-CF7F86616B1F at RichardElling.com>
> Content-Type: text/plain;	charset=us-ascii
>
> On Jun 17, 2013, at 1:36 PM, Sebastian Gabler <sequoiamobil at gmx.net> wrote:
>
>> >Dear Bill, Peter, Richard, and Saso.
>> >
>> >Thanks for the great comments.
>> >
>> >Now, changing to reverse gear, isn't it more likely to lose data by having a pool that spans across multiple HBAs than if you connect all drives to a single HBA?
> That has an easy answer: yes. But you were asking about data availability, which is not
> the same as data loss.
Good point. That helps to sort out my thinking.
>
>> >I mean, unless you make sure that no leaf VDEV ever contains more drives served by a single HBA (single-ported SATA drives) than its redundancy can tolerate, a VDEV in the pool could become unavailable upon HBA failure, ultimately leading to loss of the whole pool?
> In ZFS, if a pool's redundancy cannot be assured, then the pool goes into the FAULTED
> state and I/O is suspended until the fault is cleared. In many cases, there is no data loss, even
> though the data is unavailable while the pool is FAULTED.
>
> In general, HBAs do not have long-lived persistent storage. The data passes through them to
> the disks. ZFS can recover from the typical failure modes where data is lost in-flight without a
> commitment to media (see the many posts on cache flush behaviour).
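If I read that correctly, the suspension is recoverable from the shell
once the failed path is back in place; roughly along these lines (the
pool name "tank" is only a placeholder):

    # show only pools that currently report problems
    zpool status -x
    # once the HBA/path is restored, clear the fault so suspended I/O resumes
    zpool clear tank
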
Still, the underlying question behind my thought is whether a pool fault
is guaranteed to follow immediately from a hardware failure.
ZFS allocates writes dynamically across the top-level vdevs depending on
the workload, so not every write necessarily touches every vdev.  If that
happens, ZFS might only notice later that one of the vdevs has gone away
in the meantime.
That is what I thought I had seen when my pool died slowly.  Again, I
could be wrong about that.
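For what it's worth, the per-vdev view would probably have shown that
sooner; something like the following (pool name again just an example):

    # full vdev layout and the state of every device in the pool
    zpool status tank
    # per-vdev I/O statistics every 5 seconds, to see whether writes
    # still reach all top-level vdevs
    zpool iostat -v tank 5
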
OTOH, it's no surprise that a mirrored rpool is resilient: that layout
provides redundancy at the root node, so it doesn't matter whether one
side drops out through a disk, link, or HBA failure.  A pool of
concatenated, redundant leaf vdevs can't have redundancy at the root
node, because ZFS doesn't support nested vdevs.
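What should help, of course, is spreading the members of each leaf vdev
across the HBAs, so that no single HBA failure exceeds a vdev's
redundancy.  A rough sketch with made-up device names, assuming the c1*
disks hang off one HBA and the c2* disks off the other:

    # every raidz vdev sits behind a single HBA: losing that HBA takes
    # a whole top-level vdev with it
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 raidz c2t0d0 c2t1d0 c2t2d0

    # each mirror has one leg per HBA: either HBA can fail and every
    # vdev stays available
    zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0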


BR,

Sebastian



