[OpenIndiana-discuss] SMR disks
Nick Tan
nick.tan at gmail.com
Fri May 1 12:09:00 UTC 2015
On Friday, May 1, 2015, Andrew Gabriel <illumos at cucumber.demon.co.uk> wrote:
> On 01/05/2015 07:13, Nick Tan wrote:
>
>> Hi all,
>>
>> Has anyone tried using SMR disks with ZFS? I bought a Seagate 8TB SMR
>> disk and put it in an eSATA enclosure for my backups. I found that zfs
>> send would cause the disk to go offline. My guess is that zfs send is
>> too fast and fills the drive's write cache.
>>
>> I tried again with just rsync and this worked fine.
>>
>
> How did you set up the drive?
> What filesystem did you use, or were you just writing to it serially like
> a tape drive?
>
> SMR disks have some interesting recording issues, particularly when
> writing non-serially as most filesystems normally do. Since there's no SMR
> support in Illumos, I presume you ran the drive in Drive Managed mode -
> this makes it look like a standard random-access drive. However, like a
> flash drive, it will actually be laying the data out on the disk very
> differently from what the host OS/filesystem imagines. Also like a flash
> drive, it has to do some housekeeping, moving blocks of data around on
> the disk and/or re-recording large ranges of previously written data, so
> performance as seen from the host system may appear very mixed, including
> some I/O requests which take long enough that, with a standard magnetic
> drive, you would assume the drive is dying (probably why you saw the drive
> reported as going offline). This may be fine for archival/backup data
> (provided the host system knows to allow a long time for I/O), but is less
> likely to be good for normal filesystem use by applications.
>
> There are better ways of driving SMR drives, but they require support in
> the operating system and/or application. One use they are better suited
> to is a key/value object store, because the drive can implement the
> object store layer entirely in its firmware, hiding the real layout from
> the system, and data is accessed entirely by key.
>
> --
> Andrew
>
I set it up as a single-drive zpool. As it's just being used for backup, I
figured it would be ok since the use case is large sequential writes. I
think, though, that zfs send just overwhelms it.
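Roughly what the setup looks like (the pool, device, and dataset names below
are just placeholders); rate-limiting the stream, e.g. with pv, is an
untested idea for keeping zfs send from outrunning the drive's write cache:

  # single-drive backup pool on the SMR disk (device name is a placeholder)
  zpool create backup c5t0d0
  zfs snapshot tank/data@2015-05
  # untested idea: throttle the stream (pv -L sets a rate limit) so it
  # doesn't outrun the drive's write cache
  zfs send tank/data@2015-05 | pv -L 50m | zfs receive backup/data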
I disconnect it and take it offsite, so I'm not too worried about other I/O
to it. I'll reconnect it once a month for a new sync. However, since zfs is
CoW, it should be ok on subsequent rsyncs.
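The monthly refresh looks roughly like this (pool and path names are just
placeholders, and the per-sync snapshot is optional):

  # reconnect the enclosure, then:
  zpool import backup
  rsync -a --delete /tank/data/ /backup/data/
  zfs snapshot backup/data@2015-06    # optional: keep one snapshot per sync
  zpool export backup                 # before disconnecting and going offsite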
This system is replacing my LTO-2 tape library and so far it's been a good
experience. It is certainly faster than tape!