[OpenIndiana-discuss] Zpool crashes system on reboot and import

CJ Keist cj.keist at colostate.edu
Tue Dec 24 14:52:03 UTC 2013


Just an update to close out this message thread.

Last night I got some downtime to fix this damaged pool.  What I did 
was export both of my data pools (data and data2).  data was my bad 
pool and data2 currently holds all the data from the data pool.  I then 
re-imported the data pool without auto-mounting the ZFS FS's, and that 
worked just fine.  From the kernel crash logs I was able to identify the 
problem ZFS FS that was causing the kernel to crash.  Since I already 
have full backups of all the data on the data2 pool, I simply destroyed 
that ZFS FS and then mounted all the remaining ZFS FS's on that pool, 
and that worked just fine.  Lastly, I imported the data2 pool, and now I 
have my system back up.
    Next I will rsync all the changes from the data2 pool back to my 
data pool, and I should be all set to go with this server.  I just hope 
I don't hit this bug in ZFS again.
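For reference, the recovery steps described above can be sketched as the following command sequence. The pool names come from this thread; the dataset name "data/broken-fs" is a placeholder for whatever filesystem the kernel crash logs pointed at:

```shell
# Export both pools (data = damaged pool, data2 = backup copy)
zpool export data
zpool export data2

# Re-import the damaged pool without mounting any of its filesystems
zpool import -N data

# Destroy the filesystem identified from the kernel crash logs
# ("data/broken-fs" is a placeholder -- substitute the real dataset name)
zfs destroy data/broken-fs

# Mount the remaining filesystems and bring the backup pool back online
zfs mount -a
zpool import data2

# Finally, sync changes made on the backup copy back to the repaired pool
rsync -aHAX /data2/ /data/
```

Note that `zpool import -N` skips mounting, which is what keeps the bad filesystem from panicking the kernel during import.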



On 12/12/13, 9:48 AM, Stephan Budach wrote:
> Am 12.12.13 17:36, schrieb CJ Keist:
>>
>>
>> On 12/12/13, 9:06 AM, Udo Grabowski (IMK) wrote:
>>> On 12/12/2013 16:14, Stephan Budach wrote:
>>>> Am 12.12.13 15:18, schrieb Jim Klimov:
>>>>> On 2013-12-12 14:38, Stephan Budach wrote:
>>>>>> So basically, I am still running with this work around of setting the
>>>>>> affected fs to read-only, before I export the zpool.
>>>>>> This server is under constant load and I just don't have the time and
>>>>>> resources to move all 370+ ZFS fs onto another storage.
>>>>>
>>>>> And how did you manage to set the read-only attribute the first time?
>>>>> Were there any problems or tricks involved? As CJ suggested, one
>>>>> wouldn't be able to do this on a pool imported read-only... did you
>>>>> import it without mounts indeed?
>>>>>
>>>>> //Jim
>>>>>
>>>> You surely can set the readonly attribute for a ZFS fs on a read-only
>>>> imported zpool. Importing the zpool read-only only seems to affect the
>>>> global setting; it is still possible to change the ZFS FS attributes
>>>> without any issue. So the work around was…
>>>>
>>>> zpool import -o ro zpool
>>>> zfs set readonly=on zpool/zfs
>>>> zpool export zpool
>>>> zpool import zpool
>>>> zfs set readonly=off zpool/zfs
>>>>
>>>> This has always worked for me and it still does.
>>>
>>> It would be interesting to know under which circumstances this problem
>>> appears. I saw from one of the crash dumps that there was a scrub
>>> active. Could it be that this happens on servers which go down with
>>> an active scrub on that pool and then fail to reactivate the scrub?
>>>
>>
>> If you look at my post, the scrub was started after the system crashed.
>> I wanted to see if a scrub might fix the issue of importing this data
>> pool.  But when I saw the scrub was going to take 60+ hours, I had
>> to export it and re-import it read-only so I could start migrating
>> data to a new location to keep the downtime to a minimum.
>>
>> I'm not sure what caused the initial crash. I know I was working at the
>> time through the web GUI of NappIt; I think my last action in the web
>> GUI was to show all logical volumes.
> I just had a look at my SR from 2 years ago, and I was performing a
> Solaris update back then. When I tried to unmount that zpool, this fs
> wouldn't unmount and claimed to be busy for no apparent reason, so I
> finally forced the zpool to export.
>
> That was when this issue started on that particular fs, after the
> following reboot.
>
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss

-- 
C. J. Keist                     Email: cj.keist at colostate.edu
Systems Group Manager           Solaris 10 OS (SAI)
Engineering Network Services    Phone: 970-491-0630
College of Engineering, CSU     Fax:   970-491-5569
Ft. Collins, CO 80523-1301

All I want is a chance to prove 'Money can't buy happiness'
