[OpenIndiana-discuss] pool messup
Robin Axelsson
gu99roax at student.chalmers.se
Mon Apr 11 09:40:39 UTC 2011
On 2011-04-11 10:28, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> I have this box for backup storage, urd-backup, which receives a ZFS copy of another system, urd. On urd-backup, I created the following pool:
>
> zpool create -f urd-backup \
> raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 c5t7d0 c5t8d0 \
> raidz2 c5t9d0 c5t10d0 c5t11d0 c5t12d0 c5t13d0 c5t14d0 c5t15d0 c5t16d0 c5t17d0 \
> raidz2 c5t18d0 c5t19d0 c5t20d0 c5t21d0 c5t22d0 c5t23d0 c5t24d0 c5t25d0 c5t26d0 \
> raidz2 c5t27d0 c5t28d0 c5t29d0 c5t30d0 c5t31d0 c5t32d0 c5t33d0 c5t34d0 c5t35d0
>
> Something recently happened, and the pool suddenly started showing lots of bad drives. It also now shows the drives in a different order than the original, which I find rather alarming. I also get an error message when trying to 'zpool replace' a drive to force a resilver of it; see the error below the zpool status output.
>
> Does anyone know what on earth could have caused this, and how I can fix it?
>
> root@urd-backup:~# zpool status urd-backup
>   pool: urd-backup
>  state: DEGRADED
> status: One or more devices are faulted in response to persistent errors.
>         Sufficient replicas exist for the pool to continue functioning in a
>         degraded state.
> action: Replace the faulted device, or use 'zpool clear' to mark the device
>         repaired.
>   scan: scrub in progress since Mon Apr 11 10:25:44 2011
>         14.4M scanned out of 37.7T at 14.4M/s, (scan is slow, no estimated time)
>         0 repaired, 0.00% done
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         urd-backup    DEGRADED     0     0     0
>           raidz2-0    DEGRADED     0     0     0
>             c5t14d0   ONLINE       0     0     0
>             c5t16d0   DEGRADED     0     0     0  too many errors
>             c5t18d0   ONLINE       0     0     0
>             c5t23d0   DEGRADED     0     0     0  too many errors
>             c5t27d0   ONLINE       0     0     0
>             c5t5d0    FAULTED      0     0     0  corrupted data
>             c5t32d0   ONLINE       0     0     0
>             c5t20d0   DEGRADED     0     0     0  too many errors
>             c5t26d0   ONLINE       0     0     0
>           raidz2-1    ONLINE       0     0     0
>             c5t30d0   ONLINE       0     0     0
>             c5t31d0   ONLINE       0     0     0
>             c5t12d0   ONLINE       0     0     0
>             c5t13d0   ONLINE       0     0     0
>             c5t0d0    ONLINE       0     0     0
>             c5t1d0    ONLINE       0     0     0
>             c5t2d0    ONLINE       0     0     0
>             c5t3d0    ONLINE       0     0     0
>             c5t4d0    ONLINE       0     0     0
>           raidz2-2    DEGRADED     0     0     0
>             c5t5d0    ONLINE       0     0     0
>             c5t6d0    ONLINE       0     0     0
>             c5t7d0    ONLINE       0     0     0
>             c5t8d0    ONLINE       0     0     0
>             c5t9d0    ONLINE       0     0     0
>             c5t10d0   ONLINE       0     0     0
>             c5t11d0   ONLINE       0     0     0
>             c5t15d0   ONLINE       0     0     0
>             c5t26d0   FAULTED      0     0     0  too many errors
>           raidz2-3    DEGRADED     0     0     0
>             c5t19d0   ONLINE       0     0     0
>             c5t21d0   ONLINE       0     0     0
>             c5t22d0   ONLINE       0     0     0
>             c5t24d0   ONLINE       0     0     0
>             c5t31d0   FAULTED      0     0     0  corrupted data
>             c5t25d0   ONLINE       0     0     0
>             c5t33d0   FAULTED      0     0     0  too many errors
>             c5t28d0   ONLINE       0     0     0
>             c5t29d0   DEGRADED     0     0     0  too many errors
>
> errors: No known data errors
> root@urd-backup:~# zpool replace -f urd-backup c5t5d0 c5t5d0
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/dsk/c5t5d0s0 is part of active ZFS pool urd-backup. Please see zpool(1M).
> root@urd-backup:~#
>
>
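Note that c5t5d0, c5t26d0 and c5t31d0 each show up in two different vdevs in that status output, which suggests the device names no longer match the labels on the disks. ZFS tracks vdevs by GUID rather than by device name, so if the controller renumbered the disks, the names printed by zpool status can be stale; that would also explain why 'zpool replace' refuses c5t5d0, since the disk currently at that name carries an active label (it is ONLINE in raidz2-2). A sketch of what I would try before replacing anything (untested, and assuming the pool can be taken offline briefly):

    # print the vdev labels on the disk to see which pool/vdev it really belongs to
    zdb -l /dev/dsk/c5t5d0s0

    # export and re-import so ZFS re-reads the labels and re-maps device names
    zpool export urd-backup
    zpool import urd-backup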
It also sounds like too much of a coincidence that 8 drives have failed at once. Maybe a port expander has failed. It would be interesting to know what these 8 drives have in common.
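For example, comparing the physical paths behind the logical device names, and what the fault manager has recorded, might show whether they sit behind the same controller port or expander. A rough sketch using standard illumos tools (adjust the device names to match your system):

    # physical /devices path behind each logical name
    ls -l /dev/dsk/c5t16d0s0 /dev/dsk/c5t23d0s0 /dev/dsk/c5t5d0s0

    # per-device error counters (hard/soft/transport errors)
    iostat -En

    # faults and error telemetry recorded by FMA
    fmadm faulty
    fmdump -eV | less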