[OpenIndiana-discuss] summary: in desperate need of fsck.zfs
Jim Klimov
jimklimov at cos.ru
Wed Jul 25 18:26:39 UTC 2012
2012-07-25 19:16, Ray Arachelian wrote:
First of all, I'm glad my suggestions have helped, at least :)
> On 07/25/2012 10:42 AM, Gregory Youngblood wrote:
>> Assuming the faulted drive or pool is not rpool, containing required
>> files for the system, why should a faulty drive or pool hang the
>> entire box? Why can't the system return an error and continue?
I think it is a bug for the system to hang on requests to a
faulty pool - unless that behavior was explicitly requested via
the "failmode" pool property. From what I gather below, the box
itself no longer hangs upon hitting problems (though some zfs/zpool
commands still do?)
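(For reference, "failmode" is a per-pool property; a quick sketch,
using a hypothetical pool named "tank":

   # zpool get failmode tank
   # zpool set failmode=continue tank

"wait", the default, blocks I/O until the device recovers;
"continue" returns EIO to new writes but keeps servicing reads
from the remaining healthy devices; "panic" deliberately crashes
the box.)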
>> On Jul 25, 2012, at 6:35 AM, Ray Arachelian <ray at arachelian.com> wrote:
>
> Not sure. It kernel-panicked with the previous version I had on there,
> which I just upgraded a couple of days ago. I think it was 151a; now
> it's running 151a5 and hasn't panicked when it hit the bad files.
There was a bug I reported, a fix for which I hope was included
in oi_151a5 (I'm not sure though), regarding deduped datasets
used without verification.
If you had written data marked as deduped (even if there was only
a single copy of the file), then a block of that file got corrupted
on-disk, and you rewrote the file with a copy from backup, there
could previously be some bad side-effects causing kernel panics.
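(The belt-and-suspenders setting for dedup is to require a
byte-for-byte comparison before a block is counted as a duplicate;
a sketch, with a hypothetical dataset name:

   # zfs set dedup=verify pond/export
   # zfs set dedup=sha256,verify pond/export

- the second form just spells out the checksum explicitly.)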
>
> Now when it loses access to the JBOD (and it just did), it doesn't hang
> the entire machine, but any zpool commands lock up and you can't kill
> them. So I'm sure that when I get home tonight, I can remove the zfs
> cache file, then power cycle the machine, add the current bad file to
> the exclude list and kick off rsync again, etc.
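For the record, the cache file lives at /etc/zfs/zpool.cache on
illumos-based systems, so that dance would look roughly like:

   # rm /etc/zfs/zpool.cache   # pools won't auto-import at next boot
   # reboot                    # or a power cycle, if it's wedged
   # zpool import data         # afterwards, re-import what you need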
>
> I'd reboot it now, but the target zpool I'm trying to copy the data to
> is on the same jbod and has failmode set to wait instead of continue.
> I'll fix this when I get home. Wish this box had a DRAC or an iLO, it
> would have made life easier. :)
For remote reboots you could use a UPS (a good idea on a storage
box anyway) with management over LAN or COM/USB ports. You can
then use NUT or other toolsets to power-cycle the UPS load on
request ;) With luck, an access point running Linux-based firmware
with NUT could serve as your UPS controller ;)
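A minimal sketch, assuming a working NUT setup whose driver
exposes the usual instant commands ("myups@nuthost" and the
"admin" user are hypothetical names):

   $ upscmd -l myups@nuthost          # list the supported commands
   $ upscmd -u admin myups@nuthost shutdown.return
                                      # drop the load, then restore it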
> Probably would be a better idea if I moved the target zpool to another
> jbod (or internally, come to think of it), so I don't corrupt it with
> all these USB disconnects.
Didn't you say the JBOD has an eSATA port, and that you're getting
hold of an HBA with such a port? You might reattach the JBOD over
a different link technology to see if that helps.
>
> Would be nice if I could clear the zpool cache without rebooting... but
> it is what it is.
Technically, the pool should be excluded from the cache whenever
you (successfully) do an explicit "zpool export".
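One way to double-check, assuming the standard cache location:

   # zpool export data
   # strings /etc/zfs/zpool.cache | grep -x data   # expect no output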
---
Replying to your other message:
> When I tried to import the pool in r/w mode, it hung - maybe it would
> have finished, but I gave up waiting after 10 minutes, so I mounted it
> read-only with "zpool import -F -f -o readonly=on data", and that
> returned immediately and I was able to see the files.
There might be some processing, like deferred-delete, which takes
place upon pool import. If there's much data marked as deleted
(especially on deduped datasets), this processing on smaller
systems can take days and several reboots, due to the kernel
running out of RAM and not spilling to swap. (Fixes for this or
similar cases were discussed, and maybe integrated in the current
distro - again, I am not sure.) Mounting read-only obviously skips
this deferred-delete processing, and probably prohibits scrubs as
well.
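If you want to test that theory, a read-only import followed by an
attempted scrub should make it visible - a sketch:

   # zpool import -o readonly=on -f -F data
   # zpool scrub data   # I'd expect this to be refused on a r/o pool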
> I then reran my rsync script and I see it hung the box in the same spot,
> so that file has a bad block or bad metadata.
Hanging should not happen (unless requested via failmode); an
error code should be returned for the read request instead.
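By the way, "zpool status -v" should enumerate the files with
permanent errors, which you could feed into rsync's exclude list
instead of discovering them one hang at a time (the exclude file
and target path below are hypothetical):

   # zpool status -v data   # the "errors:" section lists the files
   $ rsync -a --exclude-from=/tmp/badfiles.txt /data/ /backup/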
HTH,
//Jim