[OpenIndiana-discuss] Disk Free Question

James Carlson carlsonj at workingcode.com
Mon Jul 7 19:29:51 UTC 2014


On 07/07/14 15:01, DormitionSkete at hotmail.com wrote:
> 
> On Jul 7, 2014, at 12:29 PM, James Carlson <carlsonj at workingcode.com> wrote:
>>
>> At a guess, you've got several snapshots that are tying up a lot of
>> valuable resources.
> 
> 
> Thanks, Mr. Carlson.  
> 
> And that is something that I’d been wondering about for quite some time — “Do snapshots take additional space?”
> 
> I think your answer is "yes".
> 
> Am I right?

No.

A snapshot on its own doesn't take up extra space (well, a couple of KB
for pointers, but that's it).  However, ZFS is copy-on-write.  This
means that if you take a snapshot, and then write to the dataset (note
that removing files is itself "writing" to the directory), the first
write to any block covered by the snapshot allocates a fresh block for
the new data, while the snapshot keeps the old block pinned.  That
causes additional allocations, and the old blocks can't be freed as
long as the snapshot exists.

Yes, "rm" causes block allocations.

This means that over time, a snapshot will hold down -- freeze in place
-- ancient versions of the data that you may have long since forgotten
about.

So, you do have to be careful with them.  They're cheap in ordinary use,
but not at all cheap if taken on a busy file system and then completely
forgotten about.

That's why there's a difference between the "used" and the "referenced"
space.  The "referenced" space is just the amount that the live file
system can address directly; "used" counts everything, including blocks
tied up in snapshots.

Try "man zfs" and read the "Native Properties" section starting with "used":

     used

         The amount of space consumed by this dataset and all its
         descendents.  This is the value that is checked against
         this dataset's quota and reservation.
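
On reasonably recent ZFS there's also a handy shorthand that breaks
"used" down by where the space lives -- USEDSNAP is the part held only
by snapshots, USEDDS is the live data (substitute your own dataset
name):

    zfs list -o space rpool/export/home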

> And also, “Do boot environments take up additional space?”

"Maybe."

In general, if you just create new boot environments, you'll find that
they're all just referring to snapshots, so, as above, they're cheap and
take little room.

But if you then modify them -- say, by doing an upgrade to an inactive
BE -- then you'll be taking up real room for the new data.

You'll have to delete things you don't want to keep, using "zfs destroy".
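
Roughly like this (the snapshot name is a placeholder; for a whole
boot environment, "beadm destroy" does the same cleanup and also
unregisters the BE):

    # remove a single unwanted snapshot
    zfs destroy rpool/export/home@old-snap

    # remove an entire boot environment you no longer need
    beadm destroy OpenIndiana-151a7-2014-0705A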

> myadmin at tryphon.ds:~# beadm list
> BE                           Active Mountpoint Space Policy Created
> OpenIndiana-151a7-2014-0705A -      -          10.9M static 2014-07-05 17:13
> OpenIndiana-151a7-2014-0706A NR     /          26.8G static 2014-07-06 22:05
> openindiana                  -      -          12.0M static 2014-07-04 17:39
> myadmin at tryphon.ds:~# 
> 
> 
> I did some work last night, and made a new boot environment.  Would deleting that old BE from the night before, which I don’t expect to ever need again, clear up 10.9M?

Yes.  Not much.

You could do "zfs list -t snapshot" if you want all the gory details.
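
Something like this sorts the worst offenders to the top ("-S used"
sorts descending by space consumed):

    zfs list -r -t snapshot -o name,used,refer -S used rpool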

> And finally, on a somewhat unrelated, and yet somewhat related, offshoot of this:
> 
> 
> I fired up the “dead” server.  BTW, I ran Dell diagnostics on it, and it passed everything.  So it does not appear to be a hardware issue.
> 
> It looked like I ran out of disk space on it.  I could not figure out how to get into the /export/home/myadmin directory to delete some files.  That is undoubtedly where the problem is.
> 
> I was able to delete some zones that I was not using, which freed up some space — enough that I thought it should be able to boot — but it still will not boot.  I’m guessing it may be because rpool/export/home is still full.

I've never seen a full disk cause a failure to boot ... but I guess
anything could be possible.  I'd expect that the real problem is corruption.

> I’m also guessing that I have to mount that rpool somehow to be able to do that.
> 
> If I’m right, would you please lend a hand and tell me how I should be able to do that?
> 
> I’d really appreciate it.
> 
> I’ll try to make it up to you somehow, too!

You should be able to boot up to an administrative state and get a root
prompt there.  Then do "zfs list -t snapshot" to figure out what's
taking up the space and "zfs destroy ..." to remove the unwanted bits.
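
From the live/installation media, the sequence would look roughly like
this (the snapshot name is a placeholder -- pick real ones from the
listing):

    # import the root pool under an alternate root so the live
    # media's own file systems aren't disturbed
    zpool import -f -R /a rpool

    # find what's pinning space, then remove the unwanted bits
    zfs list -r -t snapshot rpool
    zfs destroy rpool/export/home@some-old-snap

    # detach cleanly, then reboot from disk
    zpool export rpool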

The one thing that definitely won't help is "rm," which is what I
suspect might have been done here.

-- 
James Carlson         42.703N 71.076W         <carlsonj at workingcode.com>


