[OpenIndiana-discuss] Zfs import fails (OpenIndiana-discuss Digest, Vol 31, Issue 28)

Jason Matthews jason at broken.net
Fri Feb 8 17:40:15 UTC 2013


wow. this would be my initial Hail Mary play. 

rm the zfs cache file in /etc/zfs (zpool.cache)
devfsadm -C
does touch /reconfigure still work? do it
reboot -p (fast boot is not my favorite)
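
roughly, something like this (a sketch; the cache file is normally
/etc/zfs/zpool.cache):

    # rm /etc/zfs/zpool.cache    # drop the cached pool/device paths
    # devfsadm -C                # clean out stale /dev links
    # touch /reconfigure         # request a reconfiguration boot
    # reboot -p                  # full reboot, not fast reboot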

might be worth checking the cabling. 

then try the import. after that i'd come to this list. 

were there any firmware upgrades involved?

j. 

Sent from Jason's handheld

On Feb 7, 2013, at 6:09 PM, Ong Yu-Phing <ong.yu.phing at group.ong-ong.com> wrote:

> I had a similar situation where, after an export, ZFS picked up the wrong disk paths, even though the disks were recognised by the system.
> 
> ===
> The zpool import fails with this sort of message:
>        pool1       UNAVAIL  insufficient replicas
>          mirror-0  UNAVAIL  corrupted data
>            c3t0d0  ONLINE
>            c3t1d0  ONLINE
> 
> When I check the disk labels on the disks:
> 
> zdb -l /dev/dsk/c3t0d0s0
> --------------------------------------------
> LABEL 0
> --------------------------------------------
>    version: 28
>    name: 'pool1'
>    state: 1
>    txg: 7173951
>    pool_guid: 8370873525947507187
>    hostid: 13162267
>    hostname: 'openindiana'
>    top_guid: 15064987019855796782
>    guid: 4751459059166773513
>    vdev_children: 3
>    vdev_tree:
>        type: 'mirror'
>        id: 0
>        guid: 15064987019855796782
>        metaslab_array: 30
>        metaslab_shift: 34
>        ashift: 9
>        asize: 1998985625600
>        is_log: 0
>        create_txg: 4
>        children[0]:
>            type: 'disk'
>            id: 0
>            guid: 4751459059166773513
>            path: '/dev/dsk/c2t0d0s0'
>            devid: 'id1,sd@n600605b002e26410183d783b0e56515a/a'
>            phys_path: '/pci@0,0/pci8086,3410@9/pci1014,3c7@0/sd@0,0:a'
>            whole_disk: 1
>            DTL: 4119
>            create_txg: 4
>        children[1]:
>            type: 'disk'
>            id: 1
>            guid: 7277976899319815787
>            path: '/dev/dsk/c2t1d0s0'
>            devid: 'id1,sd@n600605b002e264101839b39f123c1210/a'
>            phys_path: '/pci@0,0/pci8086,3410@9/pci1014,3c7@0/sd@1,0:a'
>            whole_disk: 1
>            DTL: 4118
>            create_txg: 4
> 
> So I noticed that even though the disk is c3t0d0, the path in the disk
> label shows as c2t0d0s0 (and similarly for its mirrored partner: the
> disk is c3t1d0, yet the path shows as c2t1d0s0). Is this what causes
> the really strange behaviour where, even though the devices are
> considered ONLINE, the pool itself is UNAVAIL?
> ===
> 
> Maybe you can check if you are hitting the same situation? Unfortunately, I was never able to fix this (it was caused by a live upgrade of OI from 148 to 151a3); instead, I booted from a live USB, then did a ZFS send to transfer the filesystems to another server.
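> 
> For reference, the transfer was roughly this (a sketch; the snapshot
> name, receiving host and destination pool are placeholders):
> 
>     # zfs snapshot -r pool1@migrate
>     # zfs send -R pool1@migrate | ssh otherhost zfs receive -dF backuppool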
> 
> On 07/02/2013 21:47, Ram Chander wrote:
>> "format" detects all 108 disks . I suspect it could be zfs issue. I ran out
>> of options now.
>> 
>> root@host1:~# format
>> Searching for disks...done
>> AVAILABLE DISK SELECTIONS:
>>        0. c1t0d0 <DELL-PERCH700-2.10 cyl 54665 alt 2 hd 255 sec 252>
>>           /pci@0,0/pci8086,340b@4/pci1028,1f17@0/sd@0,0
>>        1. c5t1d0 <Coraid-EtherDrive SRX42-V0.4-2.73TB>
>>           /ethdrv/sd@1,0
>>        2. c5t1d1 <Coraid-EtherDrive SRX42-V0.4-2.73TB>
>>           /ethdrv/sd@1,1
>>        3. c5t1d2 <Coraid-EtherDrive SRX42-V0.4-2.73TB>
>>           /ethdrv/sd@1,2
>>        4. c5t1d3 <Coraid-EtherDrive SRX42-V0.4-2.73TB>
>>           /ethdrv/sd@1,3
>>        5. c5t1d4 <Coraid-EtherDrive SRX42-V0.4-2.73TB>
>>           /ethdrv/sd@1,4
>> ................  etc
>> 
>> root@host1:~# devfsadm -Cv
>> root@host1:~#
>> 
>> 
>> 
>> On Thu, Feb 7, 2013 at 6:44 PM, Sašo Kiselkov <skiselkov.ml at gmail.com> wrote:
>> 
>>> You have an issue with connectivity to your drives on the Coraid HBA
>>> card. I suggest querying your HBA via its management tools to make sure
>>> you can discover all the drives on your network. Chances are, they're
>>> not all visible, which is why your pool is having trouble.
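>>> 
>>> A quick OS-side sanity check (a sketch; assuming the pool's Coraid
>>> LUNs all show up on controller c5, as in the format output above) is
>>> to see which devices still answer with a readable ZFS label:
>>> 
>>>     for d in /dev/dsk/c5t*d*s0; do
>>>         echo "== $d"
>>>         zdb -l $d | grep -E 'name:|state:|txg:'
>>>     done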
>>> 
>>> --
>>> Saso
>>> 
>>> On 02/07/2013 01:49 PM, Ram Chander wrote:
>>>> The drives are in a Coraid shelf connected to the server via a Coraid
>>>> 10G HBA card. It was exported and imported on the same host, but after
>>>> an OS upgrade (the OS disk was formatted and reinstalled). Before the
>>>> OS upgrade, zpool export was issued, and when I try to import it now,
>>>> it faults as below. The pool is not functional on any system now. I
>>>> tried the -d /dev/dsk option, but no luck.
>>>> 
>>>> 
>>>> root@storage1:~# zpool import
>>>>    pool: pool1
>>>>      id: 10136140719439709374
>>>>   state: FAULTED
>>>>  status: The pool was last accessed by another system.
>>>>  action: The pool cannot be imported due to damaged devices or data.
>>>>         The pool may be active on another system, but can be imported
>>> using
>>>>         the '-f' flag.
>>>>    see: http://illumos.org/msg/ZFS-8000-EY
>>>>  config:
>>>> 
>>>>         pool1  FAULTED  corrupted data
>>>>           raidz1-0   ONLINE
>>>>             c5t1d0   UNAVAIL  corrupted data
>>>>             c5t1d1   UNAVAIL  corrupted data
>>>>             c5t1d2   UNAVAIL  corrupted data
>>>>             c5t1d3   UNAVAIL  corrupted data
>>>>             c5t1d4   UNAVAIL  corrupted data
>>>>           raidz1-1   ONLINE
>>>>             c5t1d5   UNAVAIL  corrupted data
>>>>             c5t1d6   UNAVAIL  corrupted data
>>>>             c5t1d7   UNAVAIL  corrupted data
>>>>             c5t1d8   UNAVAIL  corrupted data
>>>>             c5t1d9   UNAVAIL  corrupted data
>>>>           raidz1-2   ONLINE
>>>>             c5t1d10  UNAVAIL  corrupted data
>>>>             c5t1d11  UNAVAIL  corrupted data
>>>>             c5t1d12  UNAVAIL  corrupted data
>>>>             c5t1d13  UNAVAIL  corrupted data
>>>>             c5t1d14  UNAVAIL  corrupted data
>>>>           raidz1-3   ONLINE
>>>>             c5t1d15  UNAVAIL  corrupted data
>>>>             c5t1d16  UNAVAIL  corrupted data
>>>>             c5t1d17  UNAVAIL  corrupted data
>>>>             c5t1d18  UNAVAIL  corrupted data
>>>>             c5t1d19  UNAVAIL  corrupted data
>>>>           raidz1-4   ONLINE
>>>>             c5t1d20  UNAVAIL  corrupted data
>>>>             c5t1d21  UNAVAIL  corrupted data
>>>>             c5t1d22  UNAVAIL  corrupted data
>>>>             c5t1d23  UNAVAIL  corrupted data
>>>>             c5t1d24  UNAVAIL  corrupted data
>>>>           raidz1-5   ONLINE
>>>>             c5t1d25  UNAVAIL  corrupted data
>>>>             c5t1d26  UNAVAIL  corrupted data
>>>>             c5t1d27  UNAVAIL  corrupted data
>>>>             c5t1d28  UNAVAIL  corrupted data
>>>>             c5t1d29  UNAVAIL  corrupted data
>>>>           raidz1-6   ONLINE
>>>>             c5t1d30  UNAVAIL  corrupted data
>>>>             c5t1d31  UNAVAIL  corrupted data
>>>>             c5t1d32  UNAVAIL  corrupted data
>>>>             c5t1d33  UNAVAIL  corrupted data
>>>>             c5t1d34  UNAVAIL  corrupted data
>>>>           raidz1-7   ONLINE
>>>>             c5t2d0   UNAVAIL  corrupted data
>>>>             c5t2d1   UNAVAIL  corrupted data
>>>>             c5t2d2   UNAVAIL  corrupted data
>>>>             c5t2d3   UNAVAIL  corrupted data
>>>>             c5t2d4   UNAVAIL  corrupted data
>>>>           raidz1-8   ONLINE
>>>>             c5t2d5   UNAVAIL  corrupted data
>>>>             c5t2d6   UNAVAIL  corrupted data
>>>>             c5t2d7   UNAVAIL  corrupted data
>>>>             c5t2d8   UNAVAIL  corrupted data
>>>>             c5t2d9   UNAVAIL  corrupted data
>>>>           raidz1-9   ONLINE
>>>>             c5t2d10  UNAVAIL  corrupted data
>>>>             c5t2d11  UNAVAIL  corrupted data
>>>>             c5t2d12  UNAVAIL  corrupted data
>>>>             c5t2d13  UNAVAIL  corrupted data
>>>>             c5t2d14  UNAVAIL  corrupted data
>>>>           raidz1-10  ONLINE
>>>>             c5t2d15  UNAVAIL  corrupted data
>>>>             c5t2d16  UNAVAIL  corrupted data
>>>>             c5t2d17  UNAVAIL  corrupted data
>>>>             c5t2d18  UNAVAIL  corrupted data
>>>>             c5t2d19  UNAVAIL  corrupted data
>>>>           raidz1-11  ONLINE
>>>>             c5t2d20  UNAVAIL  corrupted data
>>>>             c5t2d21  UNAVAIL  corrupted data
>>>>             c5t2d22  UNAVAIL  corrupted data
>>>>             c5t2d23  UNAVAIL  corrupted data
>>>>             c5t2d24  UNAVAIL  corrupted data
>>>>           raidz1-12  ONLINE
>>>>             c5t2d25  UNAVAIL  corrupted data
>>>>             c5t2d26  UNAVAIL  corrupted data
>>>>             c5t2d27  UNAVAIL  corrupted data
>>>>             c5t2d28  UNAVAIL  corrupted data
>>>>             c5t2d29  UNAVAIL  corrupted data
>>>>           raidz1-13  ONLINE
>>>>             c5t2d30  UNAVAIL  corrupted data
>>>>             c5t2d31  UNAVAIL  corrupted data
>>>>             c5t2d32  UNAVAIL  corrupted data
>>>>             c5t2d33  UNAVAIL  corrupted data
>>>>             c5t2d34  UNAVAIL  corrupted data
>>>>           raidz1-14  ONLINE
>>>>             c5t3d0   ONLINE
>>>>             c5t3d1   ONLINE
>>>>             c5t3d2   ONLINE
>>>>             c5t3d3   ONLINE
>>>>             c5t3d4   ONLINE
>>>>           raidz1-15  ONLINE
>>>>             c5t3d5   ONLINE
>>>>             c5t3d6   ONLINE
>>>>             c5t3d7   ONLINE
>>>>             c5t3d8   ONLINE
>>>>             c5t3d9   ONLINE
>>>>           raidz1-16  ONLINE
>>>>             c5t3d10  ONLINE
>>>>             c5t3d11  ONLINE
>>>>             c5t3d12  ONLINE
>>>>             c5t3d13  ONLINE
>>>>             c5t3d14  ONLINE
>>>>           raidz1-17  ONLINE
>>>>             c5t3d15  ONLINE
>>>>             c5t3d16  ONLINE
>>>>             c5t3d17  ONLINE
>>>>             c5t3d18  ONLINE
>>>>             c5t3d19  ONLINE
>>>>           raidz1-18  ONLINE
>>>>             c5t3d20  ONLINE
>>>>             c5t3d21  ONLINE
>>>>             c5t3d22  ONLINE
>>>>             c5t3d23  ONLINE
>>>>             c5t3d24  ONLINE
>>>>           raidz1-19  ONLINE
>>>>             c5t3d25  ONLINE
>>>>             c5t3d26  ONLINE
>>>>             c5t3d27  ONLINE
>>>>             c5t3d28  ONLINE
>>>>             c5t3d29  ONLINE
>>>>           raidz1-20  ONLINE
>>>>             c5t3d30  UNAVAIL  corrupted data
>>>>             c5t3d31  UNAVAIL  corrupted data
>>>>             c5t3d32  UNAVAIL  corrupted data
>>>>             c5t3d33  UNAVAIL  corrupted data
>>>>             c5t1d35  UNAVAIL  corrupted data
>>>> 
>>>> root@host:~# zpool import -FfX pool1
>>>> cannot import 'pool1': one or more devices is currently unavailable
>>>> 
>>>> root@host:~# zpool import -f pool1
>>>> cannot import 'pool1': I/O error
>>>>        Destroy and re-create the pool from
>>>>        a backup source.
>>>> 
>>>> 
>>>> 
>>>> On Wed, Feb 6, 2013 at 5:58 PM, Edward Ned Harvey (openindiana) <
>>>> openindiana at nedharvey.com> wrote:
>>>> 
>>>>>> From: Ram Chander [mailto:ramquick at gmail.com]
>>>>>> 
>>>>>> I had a zpool that was exported on another system, and when I try to
>>>>>> import it, it fails. Any idea how to recover?
>>>>> Start by proving there isn't some other problem.  Import the pool
>>>>> again on the same system that did the export.  Assuming you can
>>>>> successfully import, capture a "zpool status", then export again and
>>>>> get back to your new system...
>>>>> 
>>>>> Show us the zpool status for the pool while it's functional in the old
>>>>> system.
>>>>> 
>>>>> Your error message said missing device.  ("one or more devices currently
>>>>> unavailable").  Make sure you "devfsadm -Cv" on the new system, and make
>>>>> sure the new disks are all appearing.
>>>>> 
>>>>> Your pool isn't based on partitions or slices, is it?  If so, you'll
>>>>> have to specify the devices manually.  (I think it's zpool import -d)
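>>>>> 
>>>>> Something like this (using the pool name from earlier in the thread):
>>>>> 
>>>>>     zpool import -d /dev/dsk pool1
>>>>> 
>>>>> That makes zpool scan the given directory for devices instead of
>>>>> trusting the device paths recorded in the cache file.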
>>>>> 
>>>>> What type of disk controllers do you have in the new & old systems?
>>>>> Many HBAs will occupy some space on the drives for their config &
>>>>> metadata, transparently to the OS.  This makes the drives
>>>>> incompatible with other systems, unless they have similar compatible
>>>>> HBAs.
>>>>> 
>>>>> Ideally, you'll have simple braindead SATA/SAS controllers on both
>>>>> the source and destination machines, because they won't add any
>>>>> custom data to the drives; they just present the drive to the OS,
>>>>> simple as that.
>>>>> 
>>>>> 


