[OpenIndiana-discuss] Zpool crashes system on reboot and import

Stephan Budach stephan.budach at jvm.de
Thu Dec 12 16:48:37 UTC 2013


On 12.12.13 17:36, CJ Keist wrote:
>
>
> On 12/12/13, 9:06 AM, Udo Grabowski (IMK) wrote:
>> On 12/12/2013 16:14, Stephan Budach wrote:
>>> On 12.12.13 15:18, Jim Klimov wrote:
>>>> On 2013-12-12 14:38, Stephan Budach wrote:
>>>>> So basically, I am still running with this workaround of setting the
>>>>> affected fs to read-only before I export the zpool.
>>>>> This server is under constant load, and I just don't have the time and
>>>>> resources to move all 370+ ZFS filesystems onto another storage system.
>>>>
>>>> And how did you manage to set the read-only attribute the first time?
>>>> Were there any problems or tricks involved? As CJ suggested, one
>>>> wouldn't be able to do this on a pool imported read-only... did you
>>>> indeed import it without mounts?
>>>>
>>>> //Jim
>>>>
>>> You can certainly set the readonly attribute for a ZFS fs on a read-only
>>> mounted zpool. Mounting the zpool read-only only seems to affect the
>>> global setting; it still seems possible to change the ZFS filesystem
>>> attributes without any issue. So the workaround was…
>>>
>>> zpool import -o ro zpool
>>> zfs set readonly=on zpool/zfs
>>> zpool export zpool
>>> zpool import zpool
>>> zfs set readonly=off zpool/zfs
>>>
>>> This has always worked for me and it still does.
>>
>> It would be interesting to know under which circumstances this problem
>> appears. I saw from one of the crash dumps that there was a scrub
>> active; could it be that this happens on servers which go down with
>> an active scrub on that pool and then fail to reactivate the scrub?
>>
>
> If you look at my post, the scrub was started after the system crashed.
> I wanted to see if a scrub might fix the issue with importing this data 
> pool. But when I saw the scrub was going to take 60+ hours, I had 
> to export the pool and re-import it read-only so I could start migrating 
> data to a new location to keep the downtime to a minimum.
>
> I'm not sure what caused the initial crash. I know I was working at the 
> time through the web GUI of NappIt; I think my last action on the web 
> GUI was to show all logical volumes.
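For reference, my understanding of why the workaround quoted above works: "-o ro" is a mount option, so the pool itself is imported writable while its datasets are mounted read-only, which is why the per-fs "zfs set readonly=on" still goes through. A pool imported with the readonly pool property should refuse such a change. Roughly, using the same placeholder names "zpool" and "zpool/zfs" as in the quoted commands:

# import writable, but mount all datasets read-only (mount option)
zpool import -o ro zpool
# the property change is still allowed, because the pool is writable
zfs set readonly=on zpool/zfs
zpool export zpool
# by contrast, a fully read-only import (pool property) would reject the "zfs set"
zpool import -o readonly=on zpool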
I just had a look at my SR from 2 years ago; I was performing a 
Solaris update back then. When I tried to unmount that zpool, this fs 
wouldn't unmount and claimed to be busy for no apparent reason, so I 
finally forced the zpool to export.

That was when this issue started on that particular fs, after the 
following reboot.
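
For anyone hitting the same "dataset busy" situation: what I did back then amounted to roughly the following (from memory, so treat it as a sketch; "zpool", "zpool/zfs" and the mountpoint /zpool/zfs are just placeholders):

# see which processes are keeping the mountpoint busy
fuser -cu /zpool/zfs
# force-unmount the stubborn dataset
zfs unmount -f zpool/zfs
# force the export if the pool still refuses to let go
zpool export -f zpool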



