[OpenIndiana-discuss] migrating zfs pools
Jim Klimov
jimklimov at cos.ru
Thu Jul 18 18:07:15 UTC 2013
On 2013-07-18 19:50, Gary Gendel wrote:
> Hi,
>
> I have a pool, archive, with ZFS filesystems:
>
> /archive
> /archive/gary
> /archive/dani
> /archive/ian
> <and so on...>
>
> I want to replace this with a new set of disks along with the
> appropriate properties set (smb sharing, etc.).
>
> Basically, I want to copy the complete pool to a new pool and then
> import that pool as the original. The goal is to retire the original
> pool configuration. Sounds like something that many have done before,
> so I wanted to tap the experts who can provide direction and caveats.
This seems like a job for a recursive ZFS send, which with the
replication option should also transfer your datasets' attributes
(such as share definitions). One thing I'd suggest for the migration
is to import the newly created pool with an alternate root (-R /a),
so that when the replicated filesystems begin mounting they don't
conflict with the originals (/archive -> /a/archive); later, when you
retire the old pool and import the new one without an altroot, the
paths to the new data become the same as they were in the old setup.
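Roughly, the whole dance might look like this (pool layout and device
names below are made-up placeholders, adjust to your setup):

  # create the new pool under an alternate root, so its filesystems
  # mount under /a/... instead of clashing with the live /archive
  zpool create -R /a archive2 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0

  # snapshot the old pool recursively and send a replication stream,
  # which carries child datasets, snapshots and properties along
  zfs snapshot -r archive@migrate
  zfs send -R archive@migrate | zfs receive -Fd archive2

  # when you are happy with the copy: get the old pool out of the way
  # (export rather than destroy, so its data stays intact just in case)
  zpool export archive

  # re-import the new pool under the old name, without the altroot,
  # so the old paths and shares come back as they were
  zpool export archive2
  zpool import archive2 archive

On a live system you'd probably do an incremental catch-up first
(snapshot again and "zfs send -R -i migrate archive@migrate2") before
the final switch, but the idea is the same.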
While doing this, you might also want to apply some different data
allocation policies (copies, compression, dedup if your RAM budget
permits it, etc.). In my practice this is best done by assigning the
policies to the original datasets, then snapshotting and sending
them. You can also change attributes on the destinations during the
"zfs recv", but this may or may not be convenient (blocks written
before you make the switch keep the earlier policies, though that
doesn't break anything from a reader's perspective).
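For instance (property values here are just for illustration):

  # set the desired policies on the source datasets before taking the
  # snapshot; "zfs send -R" carries the properties along, and the
  # receive writes every block afresh, so the copy lands with them
  zfs set copies=2 archive/gary
  zfs set dedup=on archive/ian        # only if RAM really permits

  # or tune the destination afterwards; only blocks written from that
  # point on pick up the new setting
  zfs set compression=lzjb archive2/dani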
Possibly, it might make sense to send all the historical data highly
compressed (e.g. gzip-9) and then reset your new datasets to the
compression algorithms you need for performance, if applicable (e.g.
lz4, lzjb, zle, off and so on). Note that I am still unsure whether
*reading* gzip-9 or lz4 data yields any speed benefit to either side
(i.e. whether decompression speed is CPU-bound, and by how much for
the two winning options).
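That trick would boil down to something like:

  # before the final send: make the received history land as gzip-9
  zfs set compression=gzip-9 archive

  # after the migration: faster algorithm for new writes; the blocks
  # already received stay gzip-9 until they happen to be rewritten
  zfs set compression=lz4 archive2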
Since you're speaking of retiring the pool configuration, I take it
you don't have something that can simply be expanded onto the new
disks by way of mirroring (mirrors, raid10: attach new disks for
increased redundancy, wait for the resilver, detach the old disks,
expand the pool)? Eh, for completeness I've said it anyway :)
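For the record, that dance goes roughly like this (device names are
made up; repeat per mirror vdev):

  # attach a new disk alongside an old one and wait for the resilver
  zpool attach archive c1t0d0 c3t0d0
  zpool status archive          # wait until resilvering completes

  # then drop the old disk and let the pool grow to the new disk size
  zpool detach archive c1t0d0
  zpool set autoexpand=on archive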
Also, do you plan to retire the old disks or reuse them later in the
new pool? At some risk to data integrity, you can create a pool with
missing devices (e.g. using a lofi device over a sparse file for one
component, then destroying it) - this would leave your new pool
degraded until you are done migrating the data and place the old
disks into those "missing" positions. I wouldn't risk this with
single-redundancy setups, but for raidz2/raidz3 the trick might make
sense - NO WARRANTIES though :)
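A sketch of that trick, entirely at your own risk (sizes and device
names are placeholders; the fake member must claim at least the size
of the real disks):

  # back a fake disk with a sparse file via lofi
  mkfile -n 2000g /var/tmp/fakedisk
  lofiadm -a /var/tmp/fakedisk          # prints e.g. /dev/lofi/1

  # build the new raidz2 with the fake member, then remove it,
  # leaving the pool DEGRADED but usable
  zpool create -R /a archive2 raidz2 c3t0d0 c3t1d0 c3t2d0 /dev/lofi/1
  zpool offline archive2 /dev/lofi/1
  lofiadm -d /dev/lofi/1
  rm /var/tmp/fakedisk

  # once the old pool is retired, put a freed-up old disk into the
  # "missing" slot and let it resilver (use the numeric guid from
  # "zpool status" if the old device name is not accepted)
  zpool replace archive2 /dev/lofi/1 c1t0d0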
HTH,
//Jim Klimov