[OpenIndiana-discuss] KVM Failed to allocate memory: Resource temporarily unavailable
Jonathan Adams
t12nslookup at gmail.com
Thu Dec 10 09:24:09 UTC 2015
How much swap do you have?
KVM automatically allocates the same amount of swap as memory when you
start an instance; if you don't have the free swap, it won't work.
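For example, to check swap and grow it (a sketch assuming the default
rpool/swap zvol; the 16G size is purely illustrative):

  swap -sh                          # current usage and what is available
  swap -d /dev/zvol/dsk/rpool/swap  # take the swap device offline
  zfs set volsize=16G rpool/swap    # grow the backing zvol
  swap -a /dev/zvol/dsk/rpool/swap  # bring it back online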
Jon
On 10 December 2015 at 04:04, Fucai Liang (BLCT) <fcliang at baolict.com>
wrote:
>
>
> Hello, guys:
>
> I have a server running oi_151.1.8; the server has 32 GB of memory.
>
>
> root at oi01:~# prtconf | grep Memory
> Memory size: 32760 Megabytes
> root at oi01:~# echo "::memstat" | mdb -k
> Page Summary Pages MB %Tot
> ------------ ---------------- ---------------- ----
> Kernel 451736 1764 5%
> ZFS File Data 78429 306 1%
> Anon 19849 77 0%
> Exec and libs 586 2 0%
> Page cache 2170 8 0%
> Free (cachelist) 10432 40 0%
> Free (freelist) 7821076 30551 93%
>
> Total 8384278 32751
> Physical 8384277 32751
> root at oi01:~#
>
>
> I launch a KVM VM with the following script:
>
>
> #!/bin/sh
>
> qemu-kvm \
> -vnc 0.0.0.0:2 \
> -cpu host \
> -smp 2 \
> -m 8192 \
> -no-hpet \
> -localtime \
> -drive file=/dev/zvol/rdsk/rpool/svrinit,if=virtio,index=0 \
> -net nic,vlan=0,name=e1000g0,model=e1000,macaddr=2:8:20:bd:ae:01 \
> -net vnic,vlan=0,name=e1000g0,ifname=vnic01,macaddr=2:8:20:bd:ae:01 \
> -vga std \
> -daemonize
>
>
> root at oi01:~# echo "::memstat" | mdb -k
> Page Summary Pages MB %Tot
> ------------ ---------------- ---------------- ----
> Kernel 461061 1801 5%
> ZFS File Data 78446 306 1%
> Anon 2131265 8325 25%
> Exec and libs 918 3 0%
> Page cache 2255 8 0%
> Free (cachelist) 10015 39 0%
> Free (freelist) 5700318 22266 68%
>
> Total 8384278 32751
> Physical 8384277 32751
> root at oi01:~#
>
>
> Then I launch a second KVM VM:
>
> #!/bin/sh
>
> qemu-kvm \
> -enable-kvm \
> -vnc 0.0.0.0:3 \
> -cpu host \
> -smp 2 \
> -m 8192 \
> -no-hpet \
> -localtime \
> -drive file=/dev/zvol/rdsk/rpool/svr03,if=virtio,index=0 \
> -net nic,vlan=0,name=e1000g0,model=e1000,macaddr=2:8:20:bd:ae:03 \
> -net vnic,vlan=0,name=e1000g0,ifname=vnic03,macaddr=2:8:20:bd:ae:03 \
> -vga std \
> -daemonize
>
>
>
> I got the error message:
> Failed to allocate memory: Resource temporarily unavailable
>
> -bash: fork: Not enough space
>
>
> The same issue happens on OmniOS; the following is an OmniOS-discuss mail.
>
> It is caused by availrmem not being large enough for the memory to be locked.
>
> Does that mean I can only use about 14 GB of memory for KVM VMs on a
> server with 32 GB of memory installed?
> The OS can only lock about 14 GB of memory.
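> A quick way to inspect the two limits involved (both printed in pages)
> is the mdb one-liner suggested further down in this thread:
>
>   mdb -ke 'availrmem/D ; pages_pp_maximum/D'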
>
> -------------- FOLLOWING --------------
>
> Hello, guys:
>
> I have a server running OmniOS v11 r151016; the server has 32 GB of memory.
> I start two KVM virtual machines by running the following commands:
>
> qemu-system-x86_64 -enable-kvm -vnc 0.0.0.0:12 -cpu host -smp 4 -m 8192 -no-hpet
>
> qemu-system-x86_64 -enable-kvm -vnc 0.0.0.0:11 -cpu host -smp 2 -m 4096 -no-hpet
>
> One uses 8 GB of memory and the other uses 4 GB.
>
> Now the memory usage of the system is as follows:
>
> root at BLCC01:/root# prtconf | grep Memory
> Memory size: 32760 Megabytes
> root at BLCC01:/root# echo "::memstat" | mdb -k
> Page Summary Pages MB %Tot
> ------------ ---------------- ---------------- ----
> Kernel 549618 2146 7%
> ZFS File Data 668992 2613 8%
> Anon 3198732 12495 38%
> Exec and libs 1411 5 0%
> Page cache 4402 17 0%
> Free (cachelist) 10578 41 0%
> Free (freelist) 3950545 15431 47%
>
> Total 8384278 32751
> Physical 8384277 32751
> root at BLCC01:/root# swap -sh
> total: 12G allocated + 35M reserved = 12G used, 6.8G available
> root at BLCC01:/root# swap -l
> swapfile dev swaplo blocks free
> /dev/zvol/dsk/rpool/swap 263,2 8 8388600 8388600
> root at BLCC01:/root#
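> (For reference: swap -l reports 512-byte blocks, so 8388600 blocks is
> 8388600 * 512 / 2^30 = 4 GB of device swap; the larger totals from
> swap -sh are virtual swap, which counts available RAM as well as the
> swap device.)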
>
>
> root at BLCC01:/root# prctl $$
>
> NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
> project.max-locked-memory
>         usage          12.0GB
>         system         16.0EB    max   deny                                 -
> project.max-port-ids
>         privileged      8.19K      -   deny                                 -
>         system          65.5K    max   deny                                 -
> project.max-shm-memory
>         privileged      8.00GB     -   deny                                 -
>         system         16.0EB    max   deny                                 -
>
> #prstat -J
>
> PROJID NPROC SWAP RSS MEMORY TIME CPU PROJECT
> 1 5 12G 12G 38% 1:07:23 5.6% user.root
> 0 43 72M 76M 0.2% 0:00:59 0.0% system
> 3 5 4392K 14M 0.0% 0:00:00 0.0% default
>
>
>
> Then I start the third VM (4 GB of memory), and it fails with the following error:
>
>
> qemu-system-x86_64 -enable-kvm -vnc 0.0.0.0:2 -cpu host -smp 2 -m 4096 -no-hpet
>
> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
>
>
> I have 15 GB of free memory in the system; why can qemu-system-x86_64
> not lock enough memory?
>
> Thanks for your help!
>
> Sorry for my poor English!
>
>
>
>
> -----------------------------------
> fcliang
>
>
>
> Thanks for your help!
>
> When the server boots up, it has 7989066 pages of availrmem. After I
> launch one VM (8 GB of memory), availrmem decreases to 4756624.
>
>
>
> 7989066 - 4756624 = 3232442 pages
>
> 3232442 pages / 256 pages-per-MB = 12626.7 MB, and 12626.7 / 1024 = 12.3 GB
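> The 256 pages-per-MB factor follows from the 4 KB x86 page size:
> 1 MB / 4 KB = 256. The page size can be confirmed with pagesize(1):
>
>   $ pagesize
>   4096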
>
>
>
> root at BLCC01:/root# mdb -ke 'availrmem/D ; pages_pp_maximum/D'
> availrmem:
> availrmem: 7989066
> pages_pp_maximum:
> pages_pp_maximum: 325044
>
> root at BLCC01:/root# qemu-system-x86_64 -enable-kvm -vnc 0.0.0.0:12 -cpu host -smp 4 -m 8192 -no-hpet
>
>
>
> root at BLCC01:/root# mdb -ke 'availrmem/D ; pages_pp_maximum/D'
> availrmem:
> availrmem: 4756624
> pages_pp_maximum:
> pages_pp_maximum: 325044
> root at BLCC01:/root#
>
>
> That means the VM used 12.3 GB of availrmem; how does that happen?
>
> Thanks!
>
>
>
>
> ------------------------------
> fcliang
>
>
>
>
> On Dec 2, 2015, at 1:37, Joshua M. Clulow <josh at sysmgr.org> wrote:
>
> > On 1 December 2015 at 09:11, Dan McDonald <danmcd at omniti.com> wrote:
> >>> On Dec 1, 2015, at 12:03 PM, Fucai.Liang <fcliang at baolict.com> wrote:
> >>> then I start the third vm (4G memory), it got the following error :
> >>> qemu-system-x86_64 -enable-kvm -vnc 0.0.0.0:2 -cpu host -smp 2 -m 4096 -no-hpet
> >>>
> >>> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> >>> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> >>> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> >>> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> >>> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> >>> qemu_mlock: have only locked 1940582400 of 4294967296 bytes; still trying...
> >>>
> >>> I have 15 GB of free memory in the system; why can qemu-system-x86_64 not lock enough memory?
> >> What does "vmstat 1 5" say prior to your launch of the third VM?
> >
> > I suspect it will show that you have free memory available, but that
> > what is really happening is that we are getting here:
> >
> > https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/vm/seg_vn.c#L7989-L8002
> >
> > This is likely failing in page_pp_lock() because "availrmem" has
> > fallen below "pages_pp_maximum":
> >
> > https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/vm/vm_page.c#L3817-L3818
> >
> > We set this value here, though it can be overridden in "/etc/system":
> >
> > https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/vm/vm_page.c#L423-L436
> >
> > You can look at the current values with mdb:
> >
> > mdb -ke 'availrmem/D ; pages_pp_maximum/D'
> >
> > Increasing this value doesn't seem to be without risk: I believe that
> > it can lead to memory exhaustion deadlocks, amongst other things. I
> > don't know if it's expected to be tuneable without a reboot.
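> > For illustration only, a hypothetical override in "/etc/system" (the
> > value is in pages, so 262144 pages is 1 GB with 4 KB pages, and the
> > file is read at boot):
> >
> >   * hypothetical value; tune with care given the deadlock caveat above
> >   set pages_pp_maximum=262144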
> >
> >
> > Cheers.
> >
> > --
> > Joshua M. Clulow
> > UNIX Admin/Developer
> > http://blog.sysmgr.org
>
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>
>
>
>
> ------------------------------
> Fucai Liang
>
>
>
>
> _______________________________________________
> openindiana-discuss mailing list
> openindiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>