[OpenIndiana-discuss] Disk Free Question

DormitionSkete@hotmail.com dormitionskete at hotmail.com
Mon Jul 7 17:49:20 UTC 2014


I’m really confused.

I’m investigating why our main server died last week.  When I looked at some of the old logs from our nightly admin routines, I found that I might have had a disk space problem.  A “df -h” command on the server that died gave me this output on June 25th:

*************************************************
Disk usage:
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/OpenIndiana-151a7-2013-1228B   226G   4.3G    20G    18%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   670M   432K   669M     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1    24G   4.3G    20G    18%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   669M   220K   669M     1%    /tmp
swap                   669M   100K   669M     1%    /var/run
rpool/export           226G    32K    20G     1%    /export
rpool/export/home      226G    32K    20G     1%    /export/home
rpool/export/home/myadmin   226G    71G    20G    79%    /export/home/myadmin
rpool                  226G    48K    20G     1%    /rpool
rpool/zones            226G    41K    20G     1%    /zones
rpool/zones/archive    226G    34K    20G     1%    /zones/archive
rpool/zones/dovecot    226G    33K    20G     1%    /zones/dovecot
rpool/zones/mysql      226G    33K    20G     1%    /zones/mysql
rpool/zones/routerb2   226G    33K    20G     1%    /zones/routerb2
rpool/zones/stamps     226G    33K    20G     1%    /zones/stamps
rpool/zones/tomcat     226G    33K    20G     1%    /zones/tomcat
rpool/zones/webphp4    226G    33K    20G     1%    /zones/webphp4
rpool/zones/zone1      226G    31K    20G     1%    /zones/zone1
rpool/zones/stamps/ROOT/zbe-3   226G   420M    20G     3%    /zones/stamps/root
/export/home/myadmin    91G    71G    20G    79%    /home/myadmin
rpool/zones/tomcat/ROOT/zbe-3   226G   782M    20G     4%    /zones/tomcat/root
rpool/zones/mysql/ROOT/zbe-3   226G   768M    20G     4%    /zones/mysql/root
rpool/zones/routerb2/ROOT/zbe-3   226G   8.5G    20G    30%    /zones/routerb2/root
rpool/zones/webphp4/ROOT/zbe-4   226G   3.3G    20G    15%    /zones/webphp4/root
rpool/zones/archive/ROOT/zbe-2   226G    28G    20G    59%    /zones/archive/root
rpool/zones/dovecot/ROOT/zbe-3   226G   570M    20G     3%    /zones/dovecot/root

Wed Jun 25 05:08:13 MDT 2014
*************************************************

Note the 79% use on /export/home/myadmin.
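
If I understand ZFS right, all of these datasets share the pool’s free space, which is why every line shows the same 20G in the “avail” column.  Assuming that’s so, I gather I could cross-check the pool-wide numbers directly with something like this (my guess from the man pages):

— 

zpool list rpool
zfs list -r -o name,used,avail,refer rpool

— 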

I moved everything over to a different server, and now a “df -h” command gives me this:

*************************************************

Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/OpenIndiana-151a7-2014-0706A
                       134G    26G   482M    99%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   7.4G   392K   7.4G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                        27G    26G   482M    99%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   7.4G    24K   7.4G     1%    /tmp
swap                   7.4G    92K   7.4G     1%    /var/run
rpool/export           134G    32K   482M     1%    /export
rpool/export/home      134G    32K   482M     1%    /export/home
rpool/export/home/myadmin
                       134G    23G   482M    99%    /export/home/myadmin
rpool                  134G    46K   482M     1%    /rpool
rpool/zones            134G    37K   482M     1%    /zones
rpool/zones/archive    134G    34K   482M     1%    /zones/archive
rpool/zones/mysql      134G    33K   482M     1%    /zones/mysql
rpool/zones/routerb2   134G    33K   482M     1%    /zones/routerb2
rpool/zones/tomcat     134G    33K   482M     1%    /zones/tomcat
rpool/zones/webphp4    134G    33K   482M     1%    /zones/webphp4
rpool/zones/routerb2/ROOT/zbe-2
                       134G   8.7G   482M    95%    /zones/routerb2/root
rpool/zones/tomcat/ROOT/zbe-2
                       134G   827M   482M    64%    /zones/tomcat/root
rpool/zones/archive/ROOT/zbe-2
                       134G    28G   482M    99%    /zones/archive/root
rpool/zones/webphp4/ROOT/zbe-2
                       134G   3.3G   482M    88%    /zones/webphp4/root
rpool/zones/mysql/ROOT/zbe-2
                       134G   814M   482M    63%    /zones/mysql/root
/export/home/myadmin    24G    23G   482M    99%    /home/myadmin

*************************************************


Big problem:  99% use on /export/home/myadmin.
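
Another thing I should probably check: I assume space held by snapshots doesn’t show up in “df -h” at all, so a breakdown like the one below ought to show where the space actually went.  That’s just my reading of the zfs man page:

— 

zfs list -r -o space rpool

— 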


Now, I don’t have disk quotas set up, as far as I know, unless they were set up by default.  I certainly didn’t set any up myself.

I looked in the “OpenSolaris Bible” for information about disk quotas, and learned about this command:

— 

myadmin@tryphon.ds:~# quota -v myadmin
Disk quotas for myadmin (uid 101):
Filesystem     usage  quota  limit    timeleft  files  quota  limit    timeleft
myadmin@tryphon.ds:~# 

— 
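
Since these are ZFS datasets rather than UFS, I assume the quota command above only reports legacy UFS quotas, and that any ZFS quota would be a dataset property.  If so, I should presumably be able to check for one with:

— 

zfs get quota,refquota rpool/export/home/myadmin

— 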


So, is there a built-in quota for each user?  And if so, how can I turn that off?

Or how can I let myadmin use as much space as is on the drive?
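
If a quota property did turn out to be set on that dataset, my understanding from the docs is that clearing it would look something like this, though I haven’t actually tried it:

— 

zfs set quota=none rpool/export/home/myadmin
zfs set refquota=none rpool/export/home/myadmin

— 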

What is going on here just does not make sense to me.

I would very much appreciate it if somebody would give me a hand with this.