[OpenIndiana-discuss] Zpool replacing forever
Gary Mills
gary_mills at fastmail.fm
Mon Jan 21 02:37:48 UTC 2019
On Sun, Jan 20, 2019 at 08:16:44PM +0000, Andrew Gabriel wrote:
>
> zfs is keeping the old disk around as a ghost, in case you can put it
> back in and zfs can find good copies of the corrupt data on it during
> the resilver. It will stay in the zpool until there's a clean scrub,
> after which zfs will collapse out the replacing-0 vdev. (In your case,
> you know there is no copy of the bad data so this won't help, but in
> general it can.)
I see. That's a good explanation, one that I didn't see anywhere
else. I suppose that the man page for `zpool replace' should advise
you to correct all errors before using the command. That way, the
confused status I saw would not arise.
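For what it's worth, that pre-check amounts to something like the
following (the device names here are only placeholders):

    # confirm the pool shows "No known data errors" before swapping disks
    zpool status -v rpool

    # only then attach the new disk in place of the old one
    zpool replace rpool old_disk new_disk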
> So to fix, delete the corrupt files, and any snapshots they're in.
> Then run a scrub.
> When the scrub finishes, ZFS will collapse out the replacing-0 vdev.
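In command terms, that advice works out to roughly this (the file and
snapshot names below are made up, not the ones from my pool):

    # list the files and snapshots that hold permanent errors
    zpool status -v rpool

    # remove the damaged files and the snapshots still referencing them
    rm /path/to/damaged-file
    zfs destroy rpool/export/home@some-old-snapshot

    # a scrub that completes cleanly lets zfs drop the replacing-0 vdev
    zpool scrub rpool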
That did work. Thanks for the good advice. I found that many of the
snapshots could not be removed because they had dependent clones. In
each case, it was a BE that was the dependent clone. When I removed
the BE with beadm and did a scrub, the errors moved to another
snapshot. I wound up deleting all the previous BEs before I got a
clean status after a scrub. I suppose I could have found another way
to deal with the snapshots, but I didn't want to interfere with what
beadm expected. I did do a `zpool clear rpool' at the end. Here's
the status now:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c5t0d0s0  ONLINE       0     0     0
            c5t1d0s0  ONLINE       0     0     0

errors: No known data errors
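For anyone who runs into the same clone problem, the sequence above
looked roughly like this (the BE and snapshot names are only examples):

    # destroying a snapshot that backs a clone fails; zfs reports
    # that the snapshot has dependent clones
    zfs destroy rpool/ROOT/openindiana@2018-11-15

    # removing the old BE with beadm takes the clone and its origin
    # snapshot out of the way
    beadm destroy openindiana-2018-11-15

    # repeat the scrub and cleanup until the errors stop moving, then clear
    zpool scrub rpool
    zpool status -v rpool
    zpool clear rpool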
--
-Gary Mills- -refurb- -Winnipeg, Manitoba, Canada-