[OpenIndiana-discuss] safely cleanup pkg cache?
Stephan Althaus
Stephan.Althaus at Duedinghausen.eu
Sun Feb 28 21:18:09 UTC 2021
On 02/26/21 09:07 PM, Andreas Wacknitz wrote:
> On 23.02.21 at 08:00, Stephan Althaus wrote:
>> On 02/23/21 12:13 AM, Tim Mooney via openindiana-discuss wrote:
>>> In regard to: Re: [OpenIndiana-discuss] safely cleanup pkg cache?,
>>> Andreas...:
>>>
>>>> On 21.02.21 at 22:42, Stephan Althaus wrote:
>>>>> Hello!
>>>>>
>>>>> The "-s" option does the minimal obvious removal of the corresponding
>>>>> snapshot:
>>>
>>> My experience seems to match what Andreas and Toomas are saying: -s
>>> isn't
>>> doing what it's supposed to be doing (?).
>>>
>>> After using
>>>
>>> sudo beadm destroy -F -s -v <bename>
>>>
>>> to destroy a dozen or so boot environments, I'm down to just this
>>> for boot environments:
>>>
>>> $ beadm list
>>> BE                                Active Mountpoint Space  Policy Created
>>> openindiana                       -      -          12.05M static 2019-05-17 10:37
>>> openindiana-2021:02:07            -      -          27.27M static 2021-02-07 01:01
>>> openindiana-2021:02:07-backup-1   -      -          117K   static 2021-02-07 13:06
>>> openindiana-2021:02:07-backup-2   -      -          117K   static 2021-02-07 13:08
>>> openindiana-2021:02:07-1          NR     /          51.90G static 2021-02-07 17:23
>>> openindiana-2021:02:07-1-backup-1 -      -          186K   static 2021-02-07 17:48
>>> openindiana-2021:02:07-1-backup-2 -      -          665K   static 2021-02-07 17:58
>>> openindiana-2021:02:07-1-backup-3 -      -          666K   static 2021-02-07 18:02
>>>
>>>
>>> However, zfs list still shows (I think) snapshots for some of the
>>> intermediate boot environments that I destroyed:
>>>
>>> $ zfs list -t snapshot
>>> NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
>>> rpool/ROOT/openindiana-2021:02:07-1@install              559M      -  5.94G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:34:55  472M      -  6.28G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:46:32  555K      -  6.28G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:48:56 2.18M      -  6.45G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-06-13-22:13:18 1015M      -  9.74G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-06-21-16:25:04 1.21G      -  9.85G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-08-23-16:17:28  833M      -  9.74G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-08-28-21:51:55 1.40G      -  10.8G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-09-12-23:35:08  643M      -  11.7G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-10-02-22:55:57  660M      -  12.0G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-11-09-00:04:17  736M      -  12.4G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-12-05-01:02:10 1.02G      -  12.7G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2019-12-20-19:55:51  788M      -  12.9G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2020-02-13-23:17:35  918M      -  13.3G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-01-21-02:27:31 1.74G      -  13.9G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-06-22:47:15 1.71G      -  18.8G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-06:59:02 1.22G      -  19.1G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-19:06:07  280M      -  19.3G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-19:08:29  280M      -  19.3G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:21:52  640K      -  19.1G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:23:46  868K      -  19.2G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:48:07  294M      -  19.3G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:58:44  280M      -  19.3G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-08-00:02:17  280M      -  19.3G  -
>>> rpool/ROOT/openindiana-2021:02:07-1@2021-02-21-06:24:56 3.49M      -  19.4G  -
>>>
>>> Now I have to figure out how to map the zfs snapshots to the boot
>>> environments that I kept, so that I can "weed out" the zfs snapshots
>>> that I don't need.
>>>
>>> I appreciate all the discussion and info my question has spawned! I
>>> didn't anticipate the issue being as complicated as it appears it is.
>>>
>>> Tim
>>
>> Hello!
>>
>> "beadm -s " destroys snapshots.
>>
>> "rpool/ROOT/openindiana-2021:02:07-1" is the filesystem of the
>> current BE.
>>
>> I don't know why these snapshots are in there,
>> but they are somehow left over from the "pkg upgrade".
>>
>> I don't think that "beadm destroy -s" is to blame here.
>>
>> Maybe an additional parameter would be nice to get rid of old
>> snapshots within the BE filesystem(s).
>>
>> Greetings,
>>
>> Stephan
>>
>>
> Hi,
>
> I think I hit the bug again, even when using beadm destroy -s
>
> ╰─➤ zfs list -t snapshot
> NAME                                                         USED  AVAIL  REFER  MOUNTPOINT
> rpool1/ROOT/openindiana-2021:02:26@2021-02-22-16:33:39       489M      -  26.5G  -
> rpool1/ROOT/openindiana-2021:02:26@2021-02-24-12:32:24       472M      -  26.5G  -   <- only one snapshot here from Feb. 24th
> rpool1/ROOT/openindiana-2021:02:26@2021-02-25-13:03:15          0      -  26.5G  -
> rpool1/ROOT/openindiana-2021:02:26@2021-02-25-13:03:50          0      -  26.5G  -
> rpool1/ROOT/openindiana-2021:02:26@2021-02-26-08:35:10          0      -  26.5G  -
> rpool1/ROOT/openindiana-2021:02:26@2021-02-26-08:35:57          0      -  26.5G  -
> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-22-16:33:39   682M      -  1.99G  -
> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-24-12:32:24   653M      -  1.99G  -
> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-25-13:03:15   632K      -  2.00G  -
> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-25-13:03:50   130M      -  2.12G  -
> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-26-08:35:10   691K      -  2.07G  -
> rpool1/ROOT/openindiana-2021:02:26/var@2021-02-26-08:35:57   178M      -  2.25G  -
> ╭─andreas@skoll ~
> ╰─➤ pfexec zfs destroy rpool1/ROOT/openindiana-2021:02:26@2021-02-22-16:33:39
> ╭─andreas@skoll ~
> ╰─➤ pfexec zfs destroy rpool1/ROOT/openindiana-2021:02:26/var@2021-02-22-16:33:39
> ╭─andreas@skoll ~   <- Two older snapshots removed
> ╰─➤ beadm list
> BE                     Active Mountpoint Space  Policy Created
> openindiana-2021:02:24 -      -          23.70M static 2021-02-24 13:33
> openindiana-2021:02:25 -      -          14.08M static 2021-02-25 14:03
> openindiana-2021:02:26 NR     /          32.54G static 2021-02-26 09:35   <- Three BE's, let's remove the oldest
> ╭─andreas@skoll ~
> ╰─➤ pfexec beadm destroy -s openindiana-2021:02:24   <- See, used with -s!
> Are you sure you want to destroy openindiana-2021:02:24?
> This action cannot be undone (y/[n]): y
> Destroyed successfully
> ╭─andreas@skoll ~
> ╰─➤ beadm list
> BE                     Active Mountpoint Space  Policy Created
> openindiana-2021:02:25 -      -          14.08M static 2021-02-25 14:03   <- BE removed
> openindiana-2021:02:26 NR     /          32.41G static 2021-02-26 09:35
> ╭─andreas@skoll ~
> ╰─➤ beadm list -a
> BE/Dataset/Snapshot                                           Active Mountpoint Space   Policy Created
> openindiana-2021:02:25
>    rpool1/ROOT/openindiana-2021:02:25                         -      -          14.08M  static 2021-02-25 14:03
> openindiana-2021:02:26
>    rpool1/ROOT/openindiana-2021:02:26                         NR     /          32.41G  static 2021-02-26 09:35
>    rpool1/ROOT/openindiana-2021:02:26/var@2021-02-24-12:32:24 -      -          685.24M static 2021-02-24 13:32   <- This snapshot also survived the beadm destroy -s command
>    rpool1/ROOT/openindiana-2021:02:26/var@2021-02-25-13:03:15 -      -          654.72M static 2021-02-25 14:03
>    rpool1/ROOT/openindiana-2021:02:26/var@2021-02-26-08:35:10 -      -          691K    static 2021-02-26 09:35
>    rpool1/ROOT/openindiana-2021:02:26/var@2021-02-26-08:35:57 -      -          177.52M static 2021-02-26 09:35
>    rpool1/ROOT/openindiana-2021:02:26@2021-02-24-12:32:24     -      -          502.54M static 2021-02-24 13:32   <- Snapshot still there
>    rpool1/ROOT/openindiana-2021:02:26@2021-02-25-13:03:15     -      -          479.87M static 2021-02-25 14:03
>    rpool1/ROOT/openindiana-2021:02:26@2021-02-26-08:35:10     -      -          0       static 2021-02-26 09:35
>    rpool1/ROOT/openindiana-2021:02:26@2021-02-26-08:35:57     -      -          0       static 2021-02-26 09:35
>
> Andreas
Hi,
now I think we are (or better: I am) confusing snapshots with
filesystems in this case.
Reading the following command outputs, I conclude that there is always
a separate filesystem corresponding to each BE, so the snapshots inside
the ZFS filesystem of the current BE may have nothing to do with the
older BEs.
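One way to cross-check this might be to look at the clone origin of each
BE filesystem; a BE that was cloned from another BE shows that other BE's
snapshot as its origin. Something like this (the dataset names are just
the ones from my pool, adjust as needed):

$ zfs list -r -o name,origin rpool/ROOT
$ zfs get origin rpool/ROOT/openindiana-2021:02:20-1

If a snapshot still shows up as the origin of some BE, it cannot be
destroyed until that BE is itself destroyed or promoted with "zfs promote".
Anyway, here is what it looks like on my machine: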
$ beadm list
BE Active Mountpoint Space Policy Created
openindiana-2020:11:26 - - 40.50M static 2020-11-26 13:52
openindiana-2020:11:26-backup-1 - - 263K static 2020-12-11 22:27
openindiana-2020:12:29 - - 34.60M static 2020-12-29 22:07
openindiana-2021:01:13 - - 34.68M static 2021-01-13 21:57
openindiana-2021:02:18 - - 409.54M static 2021-02-18 22:31
openindiana-2021:02:18-backup-1 - - 42.21M static 2021-02-19 13:35
openindiana-2021:02:20 - - 42.67M static 2021-02-20 20:52
openindiana-2021:02:20-1 NR / 168.06G static 2021-02-20 21:22
steven@dell6510:~$ zfs list -t all -r rpool
NAME USED AVAIL REFER MOUNTPOINT
rpool 207G 4.34G 33K /rpool
rpool/ROOT 169G 4.34G 23K legacy
rpool/ROOT/openindiana-2020:11:26 40.5M 4.34G 37.7G /
rpool/ROOT/openindiana-2020:11:26-backup-1 263K 4.34G 37.2G /
rpool/ROOT/openindiana-2020:12:29 34.6M 4.34G 38.4G /
rpool/ROOT/openindiana-2021:01:13 34.7M 4.34G 41.9G /
rpool/ROOT/openindiana-2021:02:18 410M 4.34G 41.9G /
rpool/ROOT/openindiana-2021:02:18-backup-1 42.2M 4.34G 42.2G /
rpool/ROOT/openindiana-2021:02:20 42.7M 4.34G 42.6G /
rpool/ROOT/openindiana-2021:02:20-1 168G 4.34G 42.7G /
Now to check if "beadm destroy -s" works:
# zfs snapshot rpool/ROOT/openindiana-2020:11:26@test
$ zfs list -t all -r rpool
NAME USED AVAIL REFER MOUNTPOINT
rpool 207G 4.34G 33K /rpool
rpool/ROOT 169G 4.34G 23K legacy
rpool/ROOT/openindiana-2020:11:26 40.5M 4.34G 37.7G /
rpool/ROOT/openindiana-2020:11:26@test 0 - 37.7G -
rpool/ROOT/openindiana-2020:11:26-backup-1 263K 4.34G 37.2G /
<snip>
# beadm destroy -s openindiana-2020:11:26
Are you sure you want to destroy openindiana-2020:11:26?
This action cannot be undone (y/[n]): y
Destroyed successfully
$ zfs list -t all -r rpool
NAME USED AVAIL REFER MOUNTPOINT
rpool 207G 4.38G 34K /rpool
rpool/ROOT 169G 4.38G 23K legacy
rpool/ROOT/openindiana-2020:11:26-backup-1 263K 4.38G 37.2G /
rpool/ROOT/openindiana-2020:12:29 34.6M 4.38G 38.4G /
<snip>
This is what I personally expect to happen with "beadm destroy -s <bename>".
But maybe I am confusing things, as I am relatively new to all of this.
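If the remaining snapshots inside the current BE's filesystem really are
just leftovers from old "pkg upgrade" runs, they can probably be removed
by hand, the way Andreas did above. A rough sketch (the BE name is the
one from my machine, adjust to yours; the -n flag makes "zfs destroy" do
a dry run first):

$ zfs list -t snapshot -r rpool/ROOT/openindiana-2021:02:20-1
# zfs destroy -nv rpool/ROOT/openindiana-2021:02:20-1@<snapshot-name>
# zfs destroy rpool/ROOT/openindiana-2021:02:20-1@<snapshot-name>

Of course this should only be done for snapshots that no other BE was
cloned from (see the origin check above).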
Greetings,
Stephan