[OpenIndiana-discuss] HBA failover

Richard Elling richard.elling at richardelling.com
Tue Jun 18 00:00:37 UTC 2013


On Jun 17, 2013, at 1:36 PM, Sebastian Gabler <sequoiamobil at gmx.net> wrote:

> Dear Bill, Peter, Richard, and Saso.
> 
> Thanks for the great comments.
> 
> Now, changing to reverse gear, isn't it more likely to lose data by having a pool that spans across multiple HBAs than if you connect all drives to a single HBA?

That has an easy answer: yes. But you were asking about data availability, which is not
the same as data loss.

> I mean, unless you make sure that no leaf VDEV ever has more drives served by a single HBA (single-ported SATA drives) than its redundancy can tolerate, a VDEV in the pool could become unavailable upon HBA failure, ultimately leading to loss of the whole pool?

In ZFS, if a pool's redundancy cannot be assured, then the pool goes into the FAULTED
state and I/O is suspended until the fault is cleared. In many cases, there is no data loss, even
though the data is unavailable while the pool is FAULTED.
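
For example, with the default failmode=wait the pool suspends I/O rather than
returning errors, and resumes once the fault is repaired and cleared. A minimal
sketch, assuming a hypothetical pool named tank:

  # zpool status -x          # list pools that are not healthy
  # zpool get failmode tank  # failmode=wait (the default) suspends I/O on fault
  # zpool clear tank         # clear the fault and resume I/O once the path is back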

In general, HBAs do not have long-lived persistent storage. The data passes through them to
the disks. ZFS can recover from the typical failure modes where data is lost in-flight without a
commitment to media (see the many posts on cache flush behaviour).
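
As a sanity check after such an event, a scrub re-reads every block and verifies
it against its checksum (again assuming a pool named tank):

  # zpool scrub tank      # re-read and verify all data against checksums
  # zpool status -v tank  # report any files with unrecoverable errors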

> That is, assuming that the failure of the HBA does not lead to an immediate crash of the host, which would make it identical to the previous scenario. I'd claim that such failures are probably not handled, and so the consequences are not predictable.

I have direct experience where an HBA failed for the root pool. I ran for about an hour before I
got bored and rebooted. The HBA failed POST and was replaced. But thanks to ZFS awesomeness,
I didn't lose data. But since the only data available to the OS was what was already in the ARC, there wasn't
much I could do if it required reading a file from disk (e.g. /usr/bin/ls).

> Similar scenarios are feasible if one disk shelf dies completely, and the pool spans across more than one.

Yes, there are well-known techniques for managing diversity for availability.
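
The usual approach is to build each mirror (or raidz set) from drives behind
different HBAs or shelves, so losing one path costs redundancy rather than
availability. A minimal sketch, assuming hypothetical device names where
controllers c1 and c2 sit behind separate HBAs:

  # zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0
  # zpool status tank   # confirm each mirror spans both controllers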

> I have personally seen a single vdev in a pool go down due to drive incompatibility, and when I had to decide whether to give up the pool or try recovery, I got the impression from iostat that there had probably been transactions to the remaining VDEVs, which would make the recovery a forensic exercise. Not sure if this was indeed accurate, but I then jumped to the conclusion that an immediate, hard crash would have been preferable to a slow melt-down. Prejudice or fact?

I am not familiar with your case, so I cannot render an opinion. In my career I've seen very few
protected ZFS pool losses that were due to single disk failures. For SVM, LVM, and RAID 
controllers... the record is nowhere near as good.
 -- richard

--

Richard.Elling at RichardElling.com
+1-760-896-4422
