[OpenIndiana-discuss] zfs list when change after deletions

Grüninger, Andreas (LGL Extern) Andreas.Grueninger at lgl.bwl.de
Mon Apr 6 17:20:46 UTC 2015


NAME             USED  AVAIL  REFER  MOUNTPOINT
pool1/nfs/nfs1  6.51T  13.5T  5.79T  /datastores/nfs1_p1

And what is the value for REFER?
If you have snapshots or clones, USED is higher than REFER.
See the example above, where 6.51T - 5.79T ≈ 735G are occupied by snapshots.

Check with
zfs list -t all
if you have snapshots or clones.
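
For example, the output might look like this (hypothetical snapshot names and sizes, purely to illustrate the layout):

zfs list -t all -r pool1/nfs/nfs1
NAME                      USED  AVAIL  REFER  MOUNTPOINT
pool1/nfs/nfs1           6.51T  13.5T  5.79T  /datastores/nfs1_p1
pool1/nfs/nfs1@snap1      412G      -  5.60T  -
pool1/nfs/nfs1@snap2      323G      -  5.77T  -

Note that a snapshot's USED column counts only the space unique to that snapshot, so the per-snapshot values do not necessarily add up to usedbysnapshots.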

With zfs get all <your zfs filesystem> you get detailed information like
...
pool1/nfs/nfs1  used                        6.51T                                          -
pool1/nfs/nfs1  usedbychildren              0                                              -
pool1/nfs/nfs1  usedbydataset               5.79T                                          -
pool1/nfs/nfs1  usedbyrefreservation        0                                              -
pool1/nfs/nfs1  usedbysnapshots             735G                                           -
....
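
If you only care about the space accounting, you can also request just those properties instead of all:

zfs get used,usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation pool1/nfs/nfs1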

-----Original Message-----
From: Harry Putnam [mailto:reader at newsguy.com]
Sent: Monday, April 6, 2015 17:53
To: openindiana-discuss at openindiana.org
Subject: [OpenIndiana-discuss] zfs list when change after deletions

Running 151_9

I got a notification that a zpool was nearing the full mark.

`zfs list -r p0' showed 825 GB in use and 89 GB left.

After checking it out, I deleted about 260 GB from one of the zfs filesystems on `p0'.

I have yet to see any change in what `zfs list -r p0' shows: it is still showing 825 GB in use and 89 GB left.

Further, the zfs filesystem I deleted the 260 GB from still shows 519 GB used, as it did to start with.

A `du' of the actual dataset shows only 259 GB.

I realize `du' is mostly not the right tool for zfs, but I was still expecting to see some change in the `zfs list -r' output.

So, is there a period of time that it normally takes for a reduction in a zfs filesystem to be reflected in `zfs list -r'?
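
One check I haven't tried yet (if I'm reading the man page right) is `zfs list -o space', which should break USED down into snapshot, dataset, and child usage:

zfs list -o space -r p0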

-------       -------       ---=---       -------       -------

/bin/df -h reflects the 260 GB reduction in that zfs filesystem but still shows only 89 GB available.

I'm thinking I should be seeing something like 260 + 89 = 349 GB available.

I thought maybe I needed to scrub that disk, so I commenced a scrub, which will be running for a while.
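
To watch the scrub's progress I can run:

zpool status p0

(though from what I've read, a scrub only verifies checksums and won't actually reclaim space, so I may be barking up the wrong tree).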

Can some of you experienced hands explain what I am seeing?


