[OpenIndiana-discuss] CIFS and openindiana

Jim Klimov jimklimov at cos.ru
Sun Jul 14 14:39:21 UTC 2013


On 2013-07-14 11:16, Christopher Chan wrote:
> On Friday, July 12, 2013 09:27 PM, Jim Klimov wrote:
>> On a side note, listing is indeed probably faster - for a single
>> dataset. If you need to iterate (i.e. delete old zfs-auto-snap's
>> in a tree) then "zfs list" is still easier to use for me. And the
>> removal of snapshots is also AFAIK only doable by "zfs destroy"
>> locally on the storage box...
>
> Er...nothing beats "for i in `ls .zfs/snapshot/range`; do zfs delete $i
> ; done"
>
> listing takes seconds. zfs list takes minutes. I have almost a dozen
> datasets each with over 1.5k snapshots.

When this works - sure. Well, the loop above would have to
use realistic dataset identifiers, and "zfs destroy" (there
is no "zfs delete" subcommand), but still ;)
You'd also want "ls -d" so that ls does not descend into
the snapshot directories themselves.

What I meant was a hierarchy - when datasets are nested
inside each other, this gets ugly, something like:

for i in `ls -d .zfs/snapshot/$range */.zfs/snapshot/$range \
     */*/.zfs/snapshot/$range`; do zfs destroy $i; done

and you'd have to be quite a bit more creative about
extracting dataset names from those paths; perhaps a sed
pipe filter to splice the snapshot name onto the dataset,
i.e. "| sed 's,/\.zfs/snapshot/,@,'" (needs verification).
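To illustrate the intended rewrite (the dataset path and
snapshot name below are made up, and this assumes the mount
path mirrors the dataset name):

```shell
# Turn a visible snapshot path into the dataset@snapshot
# form that "zfs destroy" expects (illustrative names):
path='tank/home/.zfs/snapshot/daily-2013-07-01'
snap=$(printf '%s\n' "$path" | sed 's,/\.zfs/snapshot/,@,')
echo "$snap"   # -> tank/home@daily-2013-07-01
```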

You'd also need to prepend the parent dataset paths - and
all of that is only straightforward if your mounts really
are hierarchical, with nothing relocated (like pieces of
the /export/* namespace coming from all sorts of places)
and nothing you want cleaned up hidden (i.e. not mounted).

But yes, for simple cases you can certainly use ls.
And then, when there are fewer snapshots to churn through,
make a "control shot" with "zfs list" ;)
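The "control shot" could be as simple as counting what is
left; sample text stands in for real "zfs list" output here,
and the "tank/home" dataset and auto-snap prefix are only
illustrative:

```shell
# On a real box you would capture the snapshot list with:
#   zfs_output=$(zfs list -H -t snapshot -o name -r tank/home)
# Stand-in data so the sketch runs anywhere:
zfs_output='tank/home@zfs-auto-snap_hourly-2013-07-14-1200
tank/home@manual-backup'
# Count the auto-snapshots still around:
printf '%s\n' "$zfs_output" | grep -c 'zfs-auto-snap'   # -> 1
```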

By the way, I found that deletions go a lot faster if I
background the commands:
    ...; do zfs destroy $i & done; sync; wait; sync
I believe many metadata updates then fit into one TXG.
Though for some reason the "wait" does not always return
the shell only after all deletions are complete - sometimes
it comes back sooner (maybe this has to do with that thread
on signalling limitations?)

It may be a problem on RAM-constrained boxes to fire off
thousands of zfs processes this way, though. A tool like
"parallel", xargs, or a makefile could limit the number of
children; I didn't delve into that.
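For instance, xargs with "-P" can cap the number of
concurrent children; "echo" stands in for the real
"zfs destroy" below so the sketch is harmless, and the
snapshot names are made up (drop the echo to actually
destroy them):

```shell
# Run at most 4 destroys at a time (echo = dry run):
printf '%s\n' tank@a tank@b tank@c |
    xargs -P4 -I{} echo zfs destroy {}
```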

HTH,
//Jim



More information about the OpenIndiana-discuss mailing list