[OpenIndiana-discuss] Appalling disk performance for KVM
Russell Hansen
russhan at new-swankton.net
Wed Jan 4 04:50:44 UTC 2012
I'm seeing absolutely terrible disk performance for my virtual machines running under KVM.
My ZFS pool is 6 Western Digital AV-25 500GB disks arranged in 3 mirrored vdevs connected to an LSI 9210 (reflashed IBM M1015) in IT mode.
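For reference, the layout is what you would get from something like the following (reconstructed from the iostat output further down, not the exact command I originally ran):

zpool create oi_data \
    mirror c2t50014EE2AFBDC3CBd0 c2t50014EE25A683C10d0 \
    mirror c2t50014EE20518D4F6d0 c2t50014EE2AFBD87B4d0 \
    mirror c2t50014EE205154E0Bd0 c2t50014EE25A6829D6d0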
My KVM startup script:
/usr/bin/qemu-kvm \
-enable-kvm \
-smp 2 \
-m 1024 \
-no-hpet \
-rtc base=localtime,driftfix=slew \
-drive file=/dev/zvol/dsk/oi_data/kvm/system/disk0,if=virtio,format=raw,cache=none,index=0 \
-drive file=/dev/zvol/dsk/oi_data/kvm/system/disk1,if=virtio,format=raw,cache=none,index=1 \
-net nic,vlan=0,name=lan0,model=virtio,macaddr=$MAC0 \
-net vnic,vlan=0,name=lan0,ifname=$ETH0 \
-net nic,vlan=1,name=int0,model=virtio,macaddr=$MAC1 \
-net vnic,vlan=1,name=int0,ifname=$ETH1 \
-vnc $LAN_ADDR:$1 \
-monitor telnet:127.0.0.1:444$1,server,nowait \
-usbdevice tablet \
-daemonize
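(The script takes the VNC display number as $1, and MAC0/MAC1, ETH0/ETH1 and LAN_ADDR come from a small per-VM wrapper, roughly like this; the vnic names, MACs and script name below are just placeholders, not my real values:

LAN_ADDR=192.168.0.10 \
MAC0=02:08:20:12:34:01 MAC1=02:08:20:12:34:02 \
ETH0=vm0lan0 ETH1=vm0int0 \
./start-vm.sh 1

With display 1 the guest ends up on VNC :1 and the monitor on telnet port 4441.)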
My zvol settings:
NAME                       PROPERTY              VALUE                  SOURCE
oi_data/kvm/sachiel/disk1  type                  volume                 -
oi_data/kvm/sachiel/disk1  creation              Tue Nov 8 11:56 2011   -
oi_data/kvm/sachiel/disk1  used                  298G                   -
oi_data/kvm/sachiel/disk1  available             844G                   -
oi_data/kvm/sachiel/disk1  referenced            139G                   -
oi_data/kvm/sachiel/disk1  compressratio         1.00x                  -
oi_data/kvm/sachiel/disk1  reservation           none                   default
oi_data/kvm/sachiel/disk1  volsize               144G                   local
oi_data/kvm/sachiel/disk1  volblocksize          8K                     -
oi_data/kvm/sachiel/disk1  checksum              on                     default
oi_data/kvm/sachiel/disk1  compression           off                    default
oi_data/kvm/sachiel/disk1  readonly              off                    default
oi_data/kvm/sachiel/disk1  copies                1                      default
oi_data/kvm/sachiel/disk1  refreservation        149G                   local
oi_data/kvm/sachiel/disk1  primarycache          all                    default
oi_data/kvm/sachiel/disk1  secondarycache        all                    default
oi_data/kvm/sachiel/disk1  usedbysnapshots       11.8G                  -
oi_data/kvm/sachiel/disk1  usedbydataset         139G                   -
oi_data/kvm/sachiel/disk1  usedbychildren        0                      -
oi_data/kvm/sachiel/disk1  usedbyrefreservation  147G                   -
oi_data/kvm/sachiel/disk1  logbias               latency                default
oi_data/kvm/sachiel/disk1  dedup                 off                    default
oi_data/kvm/sachiel/disk1  mlslabel              none                   default
oi_data/kvm/sachiel/disk1  sync                  standard               default
oi_data/kvm/sachiel/disk1  refcompressratio      1.00x                  -
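Nothing special at creation time either; going by the properties above, each zvol was created with essentially the defaults, i.e. something like:

zfs create -V 144g oi_data/kvm/sachiel/disk1

(volblocksize was simply left at the default 8K.)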
Representative iostat output while moving some 4GB of digital photos (approx 2MB each):
                              capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
oi_data                     373G  1019G     71    377   569K  3.98M
  mirror                    124G   340G     15    127   125K  1.34M
    c2t50014EE2AFBDC3CBd0      -      -      2     71   118K  1.34M
    c2t50014EE25A683C10d0      -      -      0     69  7.01K  1.34M
  mirror                    124G   340G     31    123   256K  1.33M
    c2t50014EE20518D4F6d0      -      -      4     49   132K  1.33M
    c2t50014EE2AFBD87B4d0      -      -      4     48   123K  1.33M
  mirror                    124G   340G     23    125   189K  1.32M
    c2t50014EE205154E0Bd0      -      -      3     52  35.1K  1.32M
    c2t50014EE25A6829D6d0      -      -      4     51   156K  1.32M
-------------------------  -----  -----  -----  -----  -----  -----
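That snapshot came from watching the copy with roughly the following (interval from memory):

zpool iostat -v oi_data 10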
A Bonnie result from a napp-it install:
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
2011.12.12      32G 61586  99 151550  16  95819  17 49188  99 273833  18  1166   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 32070  99 +++++ +++ +++++ +++ 32736 100 +++++ +++ +++++ +++
2011.12.12,32G,61586,99,151550,16,95819,17,49188,99,273833,18,1166.1,3,16,32070,99,+++++,+++,+++++,+++,32736,100,+++++,+++,+++++,+++
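Napp-it drives bonnie++ itself, but as far as I can tell the run above is roughly equivalent to invoking it by hand like this (the benchmark directory is just an example):

bonnie++ -d /oi_data/bench -s 32g -u root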
As you can see, the pool itself manages around 150 MB/s sequential writes and 270 MB/s reads natively, but the copy inside the VM hovers around 4-6 MB/s with an average I/O size of roughly 10KB, so it appears I'm completely IOPS-limited. Do I need to recreate my zvols with a larger volblocksize to get decent performance, or is there something else I should be changing in the KVM configuration or inside the virtual machines?
I have one Windows Server 2003 (32-bit) VM and two Windows Server 2008 R2 (64-bit) VMs, and they all behave the same way when it comes to disk performance.
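If a larger volblocksize does turn out to be the answer, I assume I'd have to recreate and copy each zvol, since volblocksize can only be set at creation time; something along these lines (64K is only an example, and disk1_new is a placeholder name):

zfs create -V 144g -o volblocksize=64k oi_data/kvm/sachiel/disk1_new
dd if=/dev/zvol/rdsk/oi_data/kvm/sachiel/disk1 \
   of=/dev/zvol/rdsk/oi_data/kvm/sachiel/disk1_new bs=1024k
# then point the -drive line at the new zvol and destroy the old one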
Thanks,
-Russ