[OpenIndiana-discuss] zpool degraded

Timothy Coalson tsc5yc at mst.edu
Thu Jun 20 20:38:22 UTC 2013


On Thu, Jun 20, 2013 at 2:42 PM, alessio <alessio at ftgm.it> wrote:

> Yes but... what disk?  It is c6d0, I suppose, but why are all the disks
> reported as degraded?


At a guess: since writes to a raidz vdev are striped across all component
disks, when ZFS can't successfully reconstruct a block it also can't tell
which disk(s) held the bad data, so perhaps it marked them all as
degraded.  I don't know why this wouldn't show up as checksum errors on
the raidz vdev itself, though.
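
For what it's worth, the per-device READ/WRITE/CKSUM counters are usually
the first thing to check when trying to pin down which disk is actually
at fault.  Something like this ('tank' is a placeholder; use your pool
name):

    # show per-device error counters, scrub state, and any files
    # that have unrecoverable errors
    zpool status -v tank

The -v flag makes it list the files affected by permanent errors, if ZFS
has recorded any.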

ZFS might have stopped trying to read from the disk with all the errors
before it encountered the error it couldn't recover from (since, at that
point, it no longer had any redundancy).  If that's the case, a clear
followed by a scrub might be able to recover (assuming that one disk
doesn't fail further, successfully takes the writes ZFS sends it during
the scrub, and isn't hiding even more bad data...).  Checking the SMART
data for the disk that reported the errors may shed some light on the
issue.
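
Roughly, assuming a pool named 'tank' and the c6d0 disk (adjust both;
smartctl on illumos usually wants the raw device node under /dev/rdsk,
possibly with a slice suffix or a '-d sat' option depending on the
controller):

    # reset the error counters and let ZFS use the devices again
    zpool clear tank

    # re-read and verify every allocated block, repairing what it can
    zpool scrub tank

    # watch scrub progress and whether new errors show up
    zpool status -v tank

    # dump the drive's own SMART attributes and error log
    # (needs the smartmontools package installed)
    smartctl -a /dev/rdsk/c6d0

If the scrub completes without new errors, the pool should come back to
ONLINE on its own; if errors keep accumulating, replace the disk.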

Note that raidz-1, especially with a fair number of disks, is not
particularly safe: once one disk has failed there is no redundancy left,
so any error encountered while resilvering the replacement is
unrecoverable, for starters.  Consider moving to raidz-2 for your number
of disks.
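
Keep in mind that an existing raidz-1 vdev can't be converted to raidz-2
in place; it means backing the data up and re-creating the pool.  A
sketch, with placeholder pool and disk names:

    # double parity: survives any two simultaneous disk failures
    zpool create tank raidz2 c6d0 c7d0 c8d0 c9d0 c10d0 c11d0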

Tim
