[OpenIndiana-discuss] Create zone from older snapshot (not rollback)
Jim Klimov
jimklimov at cos.ru
Thu Oct 4 18:47:40 UTC 2012
2012-09-12 18:26, Mark Creamer wrote:
> I need to use an existing snapshot of a zone from a few weeks ago, and use
> it to create a new zone so I can boot it and get to the file system. I
> can't rollback the existing "production" zone so I have to use this older
> snapshot to create a new one, I think. Am I thinking this through
> correctly? I know how to clone existing zones, but not sure how to do it
> from a snapshot.
> Thanks
Hello, I'm not sure I ever saw this answered, which is uncool. I do
hope you found the right answer yourself and were not delayed for a
month by the lack of help from the list. Still, here goes, for future
readers:
The zoneadm command has a "clone" subcommand which can take a named
snapshot as the base:
# zoneadm help
...
clone [-m method] [-s <ZFS snapshot>] [brand-specific args] zonename
Clone the installation of another zone. The -m option can
be used to specify 'copy' which forces a copy of the source
zone. The -s option can be used to specify the name of a
ZFS snapshot that was taken from a previous clone command.
The snapshot will be used as the source instead of creating
a new ZFS snapshot. All other arguments are passed to the
brand clone function; see brands(5) for more information.
...
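So the supported route could look roughly like the sketch below. All
names are made up for illustration: zone1 is the existing zone, zone1old
is the new zone to be created, and @weeksago is the older snapshot of
its zbe dataset. Note that zoneadm may refuse a snapshot it did not
create itself; if so, fall back to the manual approach described below.

# zonecfg -z zone1 export > /tmp/zone1old.cfg
  (edit /tmp/zone1old.cfg: at least the zonepath, plus any IP addresses
   or VNICs that would clash with the running zone)
# zonecfg -z zone1old -f /tmp/zone1old.cfg
# zoneadm -z zone1old clone -s rpool/zones/build/zone1/ROOT/zbe@weeksago zone1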
The (probably unsupported) way I'd do this manually, if needed to
work around some non-standard situation or a real bug, would be to
just create ZFS clones based on those snapshots of the filesystem
hierarchy that makes up a zone. Back in SXCE/Sol10 there was usually
one simple filesystem dataset per zone; now there are three or more,
e.g.:
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
rpool/zones/build/zone1              583M  25.1G    34K  /zones/build/zone1
rpool/zones/build/zone1/ROOT         583M  25.1G    31K  legacy
rpool/zones/build/zone1/ROOT/zbe     285M  25.1G   285M  legacy
rpool/zones/build/zone1/ROOT/zbe-1  70.5K  25.1G   285M  legacy
rpool/zones/build/zone1/ROOT/zbe-2   298M  25.1G   298M  legacy
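Just as a sketch of recreating such a hierarchy by hand from an older
snapshot (zone2 and @weeksago are illustrative names; the properties
discussed below still have to be applied afterwards):

# zfs create rpool/zones/build/zone2
# zfs create rpool/zones/build/zone2/ROOT
# zfs clone rpool/zones/build/zone1/ROOT/zbe@weeksago \
    rpool/zones/build/zone2/ROOT/zbe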
The current ZBE dataset gets mounted at the /zones/build/zone1/root
mountpoint by "zoneadm -z zone1 ready" or "... boot". There are
also other attributes to look out for (set manually, following some
other working zone's example), including:
# zfs get all rpool/zones/build/zone1 | grep local
rpool/zones/build/zone1 sharenfs off local
rpool/zones/build/zone1 sharesmb off local
# zfs get all rpool/zones/build/zone1/ROOT | grep local
rpool/zones/build/zone1/ROOT mountpoint legacy local
rpool/zones/build/zone1/ROOT zoned on local
# zfs get all rpool/zones/build/zone1/ROOT/zbe | grep local
rpool/zones/build/zone1/ROOT/zbe  canmount                        noauto                                local
rpool/zones/build/zone1/ROOT/zbe  org.opensolaris.libbe:active    on                                    local
rpool/zones/build/zone1/ROOT/zbe  org.opensolaris.libbe:parentbe  717f5aeb-1222-6381-f3d3-cc52c9336f6e  local
# zfs get zoned,mountpoint rpool/zones/build/zone1/ROOT/zbe
NAME                              PROPERTY    VALUE   SOURCE
rpool/zones/build/zone1/ROOT/zbe  zoned       on      inherited from rpool/zones/build/zone1/ROOT
rpool/zones/build/zone1/ROOT/zbe  mountpoint  legacy  inherited from rpool/zones/build/zone1/ROOT
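To mirror those on a hand-made clone, something like the following
should do (zone2 again is an illustrative name; copy the parentbe UUID
from a zone that works under the same global BE rather than inventing
one):

# zfs set sharenfs=off rpool/zones/build/zone2
# zfs set sharesmb=off rpool/zones/build/zone2
# zfs set mountpoint=legacy rpool/zones/build/zone2/ROOT
# zfs set zoned=on rpool/zones/build/zone2/ROOT
# zfs set canmount=noauto rpool/zones/build/zone2/ROOT/zbe
# zfs set org.opensolaris.libbe:active=on rpool/zones/build/zone2/ROOT/zbe
# zfs set org.opensolaris.libbe:parentbe=717f5aeb-1222-6381-f3d3-cc52c9336f6e \
    rpool/zones/build/zone2/ROOT/zbe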
When you're done spawning new clones of those old snapshots,
you can go into the global zone's /etc/zones/ directory, copy
the existing zone's "oldzonename.xml" file into "newzone.xml",
and edit this file appropriately (paths, zone name, shared-IP
config if needed, delegated VNICs and other resources, etc.).
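For instance (zone1 and zone2 being placeholder names):

# cd /etc/zones
# cp zone1.xml zone2.xml
  (edit zone2.xml: the zone "name" and "zonepath" attributes, and any
   network or device resources that would conflict with the live zone)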
For exclusive-IP zones you change the appropriate networking
setup data files in the newly cloned zone root, as well as
its /etc/nodename, /etc/motd self-description and so on.
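Roughly, inside the new zone's root (the path and names are
illustrative, and these are the classic static-config files; adjust
to however the zone's networking is actually configured):

# cd /zones/build/zone2/root
# echo zone2 > etc/nodename
  (also review etc/hostname.*, etc/hosts, etc/netmasks,
   etc/defaultrouter and etc/motd for the new identity and addresses)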
Finally, add an entry for the new zone into /etc/zones/index, marking
it as "installed" (perhaps clone the existing zone's line, then change
the zone name and path and make a unique GUID).
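An illustrative index line (the format is zonename:state:zonepath:uuid;
the UUID here is a placeholder and should be replaced with a freshly
generated one, not copied from the old zone):

zone2:installed:/zones/build/zone2:12345678-1234-1234-1234-123456789abc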
The clone should be a defined and bootable zone at this point.
Monitor its boot at the console to watch for unexpected errors,
address conflicts and the like (zoneadm -z new boot; zlogin -C new).
Hope this helps,
//Jim Klimov