[OpenIndiana-discuss] zpool replace says the disk has a different sector alignment

Timothy Coalson tsc5yc at mst.edu
Tue Nov 19 23:31:25 UTC 2013


I would hope that attaching one new drive wouldn't make the controller
suddenly decide all other disks also had 4k sectors - he said he was
getting "larger block size" for all existing disks in the pool.

Perhaps it would help to get an answer from the disks about their reported
block sizes: hook one of each model in question up to a pure HBA (or, if
applicable, a native SATA port in AHCI mode) and query the physical/logical
block sizes.  This page may help with doing so on Solaris:

http://solaris.kuehnke.de/archives/18-Checking-physical-sector-size-of-disks-on-Solaris.html
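
If you'd rather not wade through that page, the relevant piece (as far as I
understand it) is the DKIOCGMEDIAINFOEXT ioctl on newer Solaris/illumos, which
reports both the logical and the physical block size the drive claims.  A
minimal sketch along those lines (the default device path is only a
placeholder, pass whatever /dev/rdsk node you want to check):

/*
 * blksize.c: print logical vs. physical block size of a disk on
 * Solaris/illumos using the DKIOCGMEDIAINFOEXT ioctl.  Sketch only.
 */
#include <sys/types.h>
#include <sys/dkio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stropts.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
    /* placeholder default; normally pass the device on the command line */
    const char *dev = (argc > 1) ? argv[1] : "/dev/rdsk/c5t0d0p0";
    struct dk_minfo_ext mi;
    int fd;

    if ((fd = open(dev, O_RDONLY | O_NDELAY)) < 0) {
        perror(dev);
        return (1);
    }
    if (ioctl(fd, DKIOCGMEDIAINFOEXT, &mi) < 0) {
        perror("DKIOCGMEDIAINFOEXT");
        (void) close(fd);
        return (1);
    }
    (void) printf("%s: logical %u bytes, physical %u bytes\n",
        dev, mi.dki_lbsize, mi.dki_pbsize);
    (void) close(fd);
    return (0);
}

Compile it with cc and run it as root against the whole-disk p0 node; if it
prints physical 4096 with logical 512, the drive is one of the 512-byte
emulation models, which is exactly the case where ashift matters.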

If it turns out that they had 4k physical blocks all along, it may help
performance to rebuild the pool.  I'm unsure whether you can use sd.conf to
make a 4k disk pretend to be a 512-byte-sector disk (I have only used it to do
the opposite), but if rebuilding the pool isn't an option, or you just want to
get it back to full redundancy before trying additional things, that might
be something to try.
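
For reference, the override I used for the opposite direction (telling sd that
a 512-byte-emulation drive really has 4k physical sectors) was an
sd-config-list entry in /kernel/drv/sd.conf, roughly of this form.  The
vendor/product string below is only a placeholder; it has to match the drive's
inquiry data, with the vendor ID padded to eight characters.  The illumos "ZFS
and Advanced Format disks" wiki page Stefan linked further down covers the
details:

# /kernel/drv/sd.conf (sketch; replace the placeholder vendor/product string
# with what the drive actually reports)
sd-config-list = "VENDOR  PRODUCT-ID", "physical-block-size:4096";

After editing it, something like "update_drv -vf sd" should make the driver
reread the file, though the disk usually has to be re-attached (or the box
rebooted) before the new size takes effect.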

Tim



On Tue, Nov 19, 2013 at 4:48 PM, Reginald Beardsley <pulaskite at yahoo.com> wrote:

> It's probably the new drive, NOT the controller.  I ran into this when I
> RMAed a 3 TB HGST drive. The replacement had 4k sectors instead of the 512
> byte sectors of the failed drive.  HGST was unable to supply a 512 sector
> disk.
>
> Unfortunately, as far as I know you'll have to rebuild the entire pool.
>  I'd suggest buying new 4k sector disks for the new pool and keeping the
> old ones as spares for pools built with 512 sector disks.
>
> You may need to add entries in sd.conf to notify the driver that the
> drives are 4k.  That seems to depend upon what the drives report to the
> driver.  Plan on this being tedious and time consuming.  You might get
> lucky, but it's best to be prepared.  I suggest being especially vigilant
> about backups until you've got it resolved.  Make sure you don't overwrite
> good ones with bad ones.
>
> I *think* I wrote up everything I learned on the page Klimov created, but
> don't hesitate to ask if you have questions.  If you have to do the sd.conf
> bit make sure you read George Wilson's page linked from Klimov's page.
>
> Good luck,
> Reg
>
> --------------------------------------------
> On Tue, 11/19/13, Francis Swasey <Frank.Swasey at uvm.edu> wrote:
>
>  Subject: Re: [OpenIndiana-discuss] zpool replace says the disk has a
> different sector alignment
>  To: "Discussion list for OpenIndiana" <
> openindiana-discuss at openindiana.org>
>  Date: Tuesday, November 19, 2013, 1:46 PM
>
>
>  On Nov 19, 2013, at 2:30 PM, Francis Swasey <Frank.Swasey at uvm.edu> wrote:
>
>  > On Nov 19, 2013, at 1:50 PM, Stefan Müller-Wilken <stefan.mueller-wilken at acando.de> wrote:
>  >
>  >> Hi there,
>  >>
>  >> have you looked at
>  >> http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ?
>  >> Or quoting
>  >> http://thr3ads.net/zfs-discuss/2012/09/2101915-cannot-replace-X-with-Y-devices-have-different-sector-alignment
>  >> as another source, you could also try fdisk to compare your two devices:
>  >
>  > Yes, I have looked at the first one, and fdisk -G shows me:
>  >
>  > root at bujbod1:~/blocksize# fdisk -G /dev/rdsk/c5t0d0
>  > * Physical geometry for device /dev/rdsk/c5t0d0
>  > * PCYL     NCYL     ACYL     BCYL     NHEAD NSECT SECSIZ
>  >   60788    60788    0        0        255   504   512
>  > root at bujbod1:~/blocksize# fdisk -G /dev/rdsk/c5t1d0
>  > * Physical geometry for device /dev/rdsk/c5t1d0
>  > * PCYL     NCYL     ACYL     BCYL     NHEAD NSECT SECSIZ
>  >   60788    60788    0        0        255   504   512
>  >
>  > And it is the c5t0d0 that it can't replace.
>  >
>  > What I have discovered since my first email is that I have all these
>  > messages in /var/adm/messages for all the disks in this zpool:
>  >
>  > Nov 19 14:24:33 bujbod1 zfs: [ID 447730 kern.warning] WARNING: Disk,
>  > '/dev/dsk/c5t10d0s0', has a block alignment that is larger than the
>  > pool's alignment
>  >
>  > I'm guessing that the firmware upgrade I applied to this system a month
>  > ago changed something critical to ZFS on the IBM M5120 (rebranded LSI)
>  > RAID controller.
>
>
>  Reading further into
>  http://blog.delphix.com/gwilson/2012/11/15/4k-sectors-and-zfs/,
>  I get to the zdb command...
>
>  root at bujbod1:~# zdb -l /dev/dsk/c5t0d0s0 | grep ashift
>          ashift: 12
>          ashift: 12
>          ashift: 12
>          ashift: 12
>  root at bujbod1:~# zdb -l /dev/dsk/c5t1d0s0 | grep ashift
>          ashift: 9
>          ashift: 9
>          ashift: 9
>          ashift: 9
>
>  And here is the issue... Whatever got changed in the M5120's
>  firmware now makes zfs want to use ashift=12 instead of
>  ashift=9 (which it did when the pool was created).
>
>  Suggestions for how I get myself out of this mess?
>
>  Thanks,
>    Frank
>
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>

