[OpenIndiana-discuss] rpool defragmentation

Jim Klimov jimklimov at cos.ru
Tue Jan 20 15:33:56 UTC 2015


On 20 January 2015 15:18:02 CET, Andrew Gabriel <illumos at cucumber.demon.co.uk> wrote:
>Your question about fragmentation raises the same question as before:
>what are you trying to defragment?
>
>The FRAG column in zpool list output refers to fragmentation of free
>space, not of file data.
>
>If you take a disk which is 80% full and replace it with a disk which
>is twice the size, the pool's free space becomes 6 times bigger, with
>newly allocated unfragmented free space, so the FRAG figure will drop
>to 1/6th of whatever it was before, without you needing to do anything
>else.
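
To make the quoted arithmetic concrete, here is a minimal sketch (the
100 GB pool size is an assumed illustrative figure, not from the post):

```shell
#!/bin/sh
# Illustrative numbers only: a hypothetical 100 GB pool that is 80% full.
OLD_SIZE=100                          # original pool size, GB (assumed)
OLD_FREE=$((OLD_SIZE * 20 / 100))     # 80% full -> 20 GB free
NEW_SIZE=$((OLD_SIZE * 2))            # replacement disk is twice the size
USED=$((OLD_SIZE - OLD_FREE))         # 80 GB of data carried over
NEW_FREE=$((NEW_SIZE - USED))         # 120 GB free after expansion
echo "old free: ${OLD_FREE} GB, new free: ${NEW_FREE} GB"
echo "growth factor: $((NEW_FREE / OLD_FREE))x"
```

Running this prints "old free: 20 GB, new free: 120 GB" and
"growth factor: 6x" -- the source of the 1/6th figure above.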
>
>You would only copy the filesystem to defragment the layout of files
>(or more strictly, blocks), and I suspect that will only have become an
>issue if you have been writing to the filesystem for some time with a
>highly fragmented spacemap. In many cases, the majority of the files in
>rpool will have been written during installation, when the spacemap
>would not have been fragmented, and those files have not been modified,
>so will not themselves be fragmented. Any files which have become
>fragmented are those you write to, and in many cases these will
>defragment when you next write them with the spacemap defragmented. The
>only case where this won't happen and might matter is files written
>while the spacemap was badly fragmented which are never modified again,
>but are often read (though not often enough to stay in the ARC). In
>most rpool cases, that scenario does not sound likely to be worth
>worrying about.
>
>I have copied a boot environment for another reason: to force copies=2
>on a boot environment of a single-disk system.
>
>BEs are normally clones, of course, so only changed blocks are newly
>laid down. In my case, I wanted them all laid down again with the
>copies=2 rule in place. I did this by first creating a new BE with
>beadm create. Then I used zfs destroy to blow away the new clone, and
>used zfs send/recv to create a new filesystem with the same name the
>clone had had. Slightly to my surprise, beadm was perfectly happy with
>the result, and could activate and boot from the new (non-cloned)
>filesystem just fine.
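
A rough sketch of that sequence, assuming a current BE named
openindiana and a new BE named twocopy (both names hypothetical, as is
the exact ordering -- do not run verbatim on a production system):

```shell
# Sketch only -- BE/dataset names are assumptions, not from the post.
zfs set copies=2 rpool/ROOT                  # newly written blocks get 2 copies
beadm create twocopy                         # new BE, initially just a clone
zfs destroy rpool/ROOT/twocopy               # drop the clone, keep the name free
zfs snapshot rpool/ROOT/openindiana@rewrite  # source for the full send
zfs send rpool/ROOT/openindiana@rewrite | zfs recv rpool/ROOT/twocopy
beadm activate twocopy                       # beadm accepts the non-clone BE
```

Since a plain (non -p) send stream carries no properties, the received
dataset inherits copies=2 from rpool/ROOT, so every block is laid down
twice during the receive.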
>
>Nikolam wrote:
>> As I understand it, one can first migrate the rpool to bigger drives
>> (being tight on rpool space is not very healthy anyway), and then do
>> a zfs send of the BEs on the disks themselves, or copy to a new BE.
>>
>> I am not sure whether zfs send also defragments (I suppose it does,
>> since it sees the file system layout), whereas copying files would
>> certainly defragment.
>>
>> It is also a question whether all the data on rpool is really
>> system-related and could be migrated elsewhere, or whether there are
>> many snapshots using the space.

You might be in for even more of a surprise: any dataset directly under rpool/ROOT is considered a potential bootable root filesystem. If GRUB finds the menu-requested kernel and module (i.e. miniroot archive) filenames relative to that filesystem, it can boot whatever OS exists there. I have had SXCE and OI coexisting on one system, and OmniOS and Firefly on another. Dataset and rpool attributes are not generally required for bootability; they are more like clues, for things such as matching a version of a rootfs to its 'contemporary' local zone roots.
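
For reference, an illumos GRUB menu.lst entry names the boot dataset
and the kernel/boot-archive paths relative to it, along roughly these
lines (the dataset name is illustrative):

```
title OpenIndiana
bootfs rpool/ROOT/openindiana
kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/amd64/boot_archive
```

If the paths resolve inside the bootfs dataset, GRUB can load and boot
it regardless of which distribution put it there.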

HTH,
Jim

--
Typos courtesy of K-9 Mail on my Samsung Android


