[OpenIndiana-discuss] OpenIndiana-discuss Digest, Vol 33, Issue 36

Justin Warwick justin.warwick at gmail.com
Sat Apr 20 18:02:43 UTC 2013


Thanks very much for your replies, gentlemen. zdb -l indeed shows the disk
to be in a pool "rpool" with c2d0, which led me to realize I had made a
mistake: I was looking at the wrong disk.

Some background information: the initial configuration was two pools,
"rpool" and "media", each a simple two-disk mirror. I added two more disks,
and while doing so it is possible that some of the original four were
reconnected to different SATA ports on the motherboard, though it is my
understanding that ZFS does not care much about disk device paths, because
it uses the GUIDs stored in the disk labels instead (which is why I wasn't
more careful at the time). Am I mistaken on that?
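
If I have the mechanism right, each vdev label records both the disk's
GUID and its last-known device path, and the GUID is what actually
identifies the disk; the path is only a hint. Roughly, a trimmed label
excerpt (with a made-up GUID) looks like:

    guid: 9876543210987654321
    path: '/dev/dsk/c3d0s0'

so a disk wandering to another SATA port should normally be found again
on import.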

My confusion partly stems from the disk device paths having changed; here
is output from before I physically added in the new disks:

jrw@valinor:~$ zpool status
  pool: media
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        media       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2d1    ONLINE       0     0     0
            c3d1    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 9.50K in 0h4m with 0 errors on Mon Jun 18 12:07:30 2012
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2d0s0  ONLINE       0     0     0
            c3d0s0  ONLINE       0     0     0

errors: No known data errors

I am afraid that I stepped away from the problem for a while without good
notes, so some points are fuzzy (and apparently my bash HISTSIZE is the
default 500), but I am pretty sure I just did a zpool detach, intending to
reorganize or maybe to address some perceived weird behavior (which was in
fact ZFS trying to cope with my mistakes). In any case it does seem to fit
the scenario you described, Jim.
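
For the archives, my best reconstruction (the device name is a guess,
since the history is gone) is that I ran something like:

jrw@valinor:~$ pfexec zpool detach media c3d1

and, if George is right about the stale label, the way back is presumably
the forced attach, which rewrites the label:

jrw@valinor:~$ pfexec zpool attach -f media c2d1 c3d1

(I have also put 'export HISTSIZE=10000' in my ~/.bashrc so that next time
there will at least be a record.)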

Thanks for pointing out the SATA/IDE problem; I was wondering about that.
Most of my experience is with plain old Sun-brand SPARC boxes, so the
absence of a slice number on some of the disks was a mystery to me. I have
no other fancy requirements like dual-boot. These are all identical, new
Seagate Momentus XT drives. Maybe there is a BIOS setting to change.
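
If I do find such a BIOS switch (IDE legacy vs. AHCI), my understanding
(corrections welcome) is that the disks would stop attaching as cmdk
devices on the pci-ide nexus, e.g.

/pci@0,0/pci-ide@11/ide@0/cmdk@0,0    (c2d0)

and would instead come up via the ahci/sd drivers with cXtYdZ-style names.
That would change the recorded device paths yet again, including the boot
path for rpool, so I would plan on fixing that up (and re-importing the
data pools) after flipping the setting.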

  -- Justin


>
> ------------------------------
>
> Message: 5
> Date: Fri, 19 Apr 2013 09:30:07 -0400
> From: George Wilson <george.wilson at delphix.com>
> To: openindiana-discuss at openindiana.org
> Subject: Re: [OpenIndiana-discuss] zpool vdev membership misreported
> Message-ID: <517146DF.7040405 at delphix.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Justin,
>
> More than likely c3d1 contains a label from an old pool. If you run
> 'zdb -l /dev/dsk/c3d1s0' you should be able to see it if one exists. If
> it does contain information for an old pool, then using the '-f' option
> when attaching will solve your problem and relabel the device with the
> new pool information.
>
> You could also do a 'zpool import' to see if c3d1 shows up with
> information about an exported pool.
>
> Thanks,
> George
>
> On 4/19/13 5:20 AM, Justin Warwick wrote:
> > jrw@valinor:~$ pfexec zpool attach media c2d1 c3d1
> > invalid vdev specification
> > use '-f' to override the following errors:
> > /dev/dsk/c3d1s0 is part of active ZFS pool rpool. Please see zpool(1M).
> >
> > Yet I do not see c3d1 in zpool status. Initially I had 4 disks total,
> > all Seagate SATA drives, in two separate plain two-disk mirrors. I
> > added a couple more disks, but haven't yet added them to a pool. I
> > noticed that the device identifiers changed (which I did not expect).
> > I broke the mirror on the "media" pool (I can't remember why I did
> > that). Since that time I have been getting that message, which seems
> > to mean the disk is somehow halfway stuck in the mirror. Should I just
> > issue the -f override? Am I asking for trouble if I do?
> >
> >
> > jrw@valinor:~$ zpool status
> >    pool: media
> >   state: ONLINE
> >    scan: scrub repaired 0 in 3h38m with 0 errors on Sun Feb 17 04:44:29 2013
> > config:
> >
> >          NAME        STATE     READ WRITE CKSUM
> >          media       ONLINE       0     0     0
> >            c2d1      ONLINE       0     0     0
> >
> > errors: No known data errors
> >
> >    pool: rpool
> >   state: ONLINE
> >    scan: scrub in progress since Fri Apr 19 01:47:19 2013
> >      27.1M scanned out of 75.9G at 3.87M/s, 5h34m to go
> >      0 repaired, 0.03% done
> > config:
> >
> >          NAME        STATE     READ WRITE CKSUM
> >          rpool       ONLINE       0     0     0
> >            mirror-0  ONLINE       0     0     0
> >              c2d0s0  ONLINE       0     0     0
> >              c3d0s0  ONLINE       0     0     0
> >
> > errors: No known data errors
> > jrw@valinor:~$ echo | pfexec format
> > Searching for disks...done
> >
> >
> > AVAILABLE DISK SELECTIONS:
> >         0. c2d0 <Unknown-Unknown-0001 cyl 60797 alt 2 hd 255 sec 63>
> >            /pci@0,0/pci-ide@11/ide@0/cmdk@0,0
> >         1. c2d1 <ST950056-         5YX1J9E-0001-465.76GB>
> >            /pci@0,0/pci-ide@11/ide@0/cmdk@1,0
> >         2. c3d0 <ST950056-         5YX1R0Z-0001-465.76GB>
> >            /pci@0,0/pci-ide@11/ide@1/cmdk@0,0
> >         3. c3d1 <Unknown-Unknown-0001 cyl 60797 alt 2 hd 255 sec 63>
> >            /pci@0,0/pci-ide@11/ide@1/cmdk@1,0
> >         4. c7d0 <ST950056-         5YX1FJF-0001-465.76GB>  Media3
> >            /pci@0,0/pci-ide@14,1/ide@0/cmdk@0,0
> >         5. c7d1 <ST950056-         5YX1Q39-0001-465.76GB>  Media4
> >            /pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0
> > Specify disk (enter its number): Specify disk (enter its number):
> >
> >   -- Justin
> > _______________________________________________
> > OpenIndiana-discuss mailing list
> > OpenIndiana-discuss at openindiana.org
> > http://openindiana.org/mailman/listinfo/openindiana-discuss
>
>
>
>
> ------------------------------
>
> Message: 6
> Date: Fri, 19 Apr 2013 16:45:57 +0200
> From: Jim Klimov <jim at cos.ru>
> To: openindiana-discuss at openindiana.org
> Subject: Re: [OpenIndiana-discuss] zpool vdev membership misreported
> Message-ID: <517158A5.5070003 at cos.ru>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 2013-04-19 11:20, Justin Warwick wrote:
> > I broke the mirror on the
> > "media" pool (I can't remember why I did that). Since that time I have
> > been getting that message, which seems to mean the disk is somehow
> > halfway stuck in the mirror.
> Agreeing with George's answer, I'd like to inquire: how did you
> break the mirror? The state you describe seems like physical
> removal of one disk (c3d1), then detachment of the absent
> component from the remaining "live" mirror, turning it into
> a single-disk pool, and then physical re-addition of the old disk
> to the system. Due to a name and/or GUID conflict, this
> disk does not get imported as a pool. Maybe due to having the
> same GUID as a live pool, it is considered part of one -
> mistakenly, in fact.
>
> BTW, interesting corner case for our recent discussion of the
> possible enhancements to detection and import of rpool (i.e.
> for removable media, or to switch HDDs between SATA and IDE
> legacy modes easily).
>
> Speaking of which, your cXdY disks on the "pci-ide" bus seem to be
> in IDE legacy mode. Is this intentional (or are they really IDE)?
> You know this is sub-optimal for SATA, right? Though the use of
> this mode may be dictated by other constraints (compatibility with
> dual-booted OSes, missing SATA drivers in illumos, etc.).
>
> Cheers,
> //Jim
>
>
>
>
> ------------------------------
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
>
> End of OpenIndiana-discuss Digest, Vol 33, Issue 36
> ***************************************************
>

