[OpenIndiana-discuss] OpenIndiana CPU Usage.
Bruno Damour
llama at ruomad.net
Fri Nov 12 07:10:21 UTC 2010
On 12/11/10 08:04, Nicholas Metsovon wrote:
> Thank you for the reply.
>
> Yes, this persists over time. It always has at least a 19% load. And I have
> not even set up Tomcat or Apache, or anything yet.
>
>
> Does any of this tell you anything?
>
> I take it a 20% persistent load is not normal, then? It certainly isn't in
> Linux.
>
>
> This is prstat
>
> PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
> 688 root 48M 35M sleep 59 0 0:00:05 0.2% Xorg/1
> 5 root 0K 0K sleep 99 -20 0:00:02 0.1% zpool-rpool/150
> 761 gdm 92M 25M sleep 59 0 0:00:02 0.1% gdm-simple-gree/1
> 11 root 14M 13M sleep 59 0 0:00:16 0.1% svc.configd/19
> 754 gdm 104M 35M sleep 59 0 0:00:03 0.1% gnome-settings-/1
> 774 dsadmin 9900K 3772K cpu4 59 0 0:00:00 0.0% prstat/1
> 765 dsadmin 13M 5456K sleep 59 0 0:00:00 0.0% sshd/1
> 760 gdm 87M 18M sleep 59 0 0:00:00 0.0% gnome-power-man/1
> 612 root 24M 12M sleep 59 0 0:00:01 0.0% fmd/27
> 9 root 19M 12M sleep 59 0 0:00:06 0.0% svc.startd/15
> 525 root 11M 3904K sleep 59 0 0:00:00 0.0% nscd/28
> 751 gdm 12M 6028K sleep 59 0 0:00:00 0.0% gconfd-2/1
> 768 dsadmin 9060K 2744K sleep 59 0 0:00:00 0.0% bash/1
> 753 gdm 7832K 5988K sleep 59 0 0:00:00 0.0% at-spi-registry/1
> 738 gdm 20M 10M sleep 59 0 0:00:00 0.0% gnome-session/2
> 759 gdm 80M 13M sleep 59 0 0:00:00 0.0% metacity/1
> 356 root 7000K 5636K sleep 59 0 0:00:01 0.0% hald/4
> 737 gdm 3212K 1912K sleep 59 0 0:00:00 0.0% dbus-daemon/1
> 264 root 9828K 3412K sleep 59 0 0:00:00 0.0% devfsadm/6
> 602 root 10M 3760K sleep 59 0 0:00:00 0.0% inetd/4
> 144 root 8328K 1780K sleep 59 0 0:00:00 0.0% dhcpagent/1
> 134 daemon 13M 5248K sleep 59 0 0:00:00 0.0% kcfd/3
> 93 root 11M 5028K sleep 59 0 0:00:00 0.0% nwamd/10
> 675 root 15M 6716K sleep 59 0 0:00:00 0.0% rad/4
> 646 root 9708K 1796K sleep 59 0 0:00:00 0.0% sshd/1
> 679 root 12M 5944K sleep 59 0 0:00:00 0.0% intrd/1
> 682 root 9908K 3460K sleep 59 0 0:00:00 0.0% gdm-binary/2
> 648 smmsp 6276K 1796K sleep 59 0 0:00:00 0.0% sendmail/1
> 493 root 7432K 1436K sleep 59 0 0:00:00 0.0% cron/1
> 545 daemon 3252K 1300K sleep 59 0 0:00:00 0.0% rpcbind/1
> 684 root 11M 4364K sleep 59 0 0:00:00 0.0% gdm-simple-slav/2
> 722 root 6208K 2148K sleep 59 0 0:00:00 0.0% sendmail/1
> 559 root 8164K 1588K sleep 59 0 0:00:00 0.0% automountd/2
> 561 root 8396K 1968K sleep 59 0 0:00:00 0.0% automountd/4
> 280 root 9396K 2812K sleep 59 0 0:00:00 0.0% rcm_daemon/4
> 610 root 2064K 1552K sleep 59 0 0:00:00 0.0% ttymon/1
> 257 root 2220K 1564K sleep 59 0 0:00:00 0.0% powerd/4
> 375 root 3948K 2344K sleep 59 0 0:00:00 0.0% hald-addon-netw/1
> 736 gdm 3568K 1348K sleep 59 0 0:00:00 0.0% dbus-launch/1
> 536 root 2892K 2080K sleep 59 0 0:00:00 0.0% hald-addon-stor/3
> 322 root 9336K 2972K sleep 59 0 0:00:00 0.0% picld/4
> 616 root 4224K 2268K sleep 59 0 0:00:00 0.0% rmvolmgr/1
> 450 root 4332K 3128K sleep 59 0 0:00:00 0.0% console-kit-dae/2
> 287 root 3392K 1988K sleep 59 0 0:00:00 0.0% dbus-daemon/1
> 297 root 2608K 1536K sleep 60 -20 0:00:00 0.0% zonestatd/5
> 355 root 16M 5792K sleep 59 0 0:00:00 0.0% cupsd/1
> 592 root 2212K 1340K sleep 59 0 0:00:00 0.0% sac/1
> 725 root 3060K 1096K sleep 59 0 0:00:00 0.0% in.ndpd/1
> 161 root 2572K 1668K sleep 59 0 0:00:00 0.0% pfexecd/3
> 206 root 11M 3164K sleep 59 0 0:00:00 0.0% syseventd/18
> 357 root 3840K 2296K sleep 59 0 0:00:00 0.0% hald-runner/1
> 598 root 2348K 1436K sleep 59 0 0:00:00 0.0% ttymon/1
> 47 netcfg 4780K 3692K sleep 59 0 0:00:00 0.0% netcfgd/5
> Total: 64 processes, 387 lwps, load averages: 0.98, 0.40, 0.15
>
>
>
>
> prstat -m
>
> PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/NLWP
> 761 gdm 0.3 0.0 0.0 0.0 0.0 0.0 99 0.2 8 1 40 0 gdm-simple-g/1
> 780 dsadmin 0.2 0.1 0.0 0.0 0.0 0.0 100 0.0 54 1 351 0 prstat/1
> 381 root 0.2 0.1 0.0 0.0 0.0 0.0 100 0.1 3 0 11 0 hald-addon-a/1
> 760 gdm 0.1 0.1 0.0 0.0 0.0 0.0 100 0.1 3 1 9 0 gnome-power-/1
> 688 root 0.2 0.0 0.0 0.0 0.0 0.0 100 0.0 9 0 82 0 Xorg/1
> 765 dsadmin 0.0 0.0 0.0 0.0 0.0 0.0 100 0.2 5 1 41 0 sshd/1
> 356 root 0.0 0.0 0.0 0.0 0.0 25 75 0.0 2 1 10 0 hald/4
> 264 root 0.0 0.0 0.0 0.0 0.0 50 50 0.0 3 0 6 0 devfsadm/6
> 525 root 0.0 0.0 0.0 0.0 0.0 3.6 96 0.0 54 0 319 0 nscd/28
> 722 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 1 0 10 0 sendmail/1
> 144 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.1 2 0 6 0 dhcpagent/1
> 675 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 rad/4
> 646 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 sshd/1
> 679 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 intrd/1
> 682 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 gdm-binary/2
> 751 gdm 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 gconfd-2/1
> 648 smmsp 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 sendmail/1
> 493 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 cron/1
> 545 daemon 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 rpcbind/1
> 684 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 gdm-simple-s/2
> 559 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 automountd/2
> 561 root 0.0 0.0 0.0 0.0 0.0 25 75 0.0 0 0 0 0 automountd/4
> 280 root 0.0 0.0 0.0 0.0 0.0 50 50 0.0 0 0 0 0 rcm_daemon/4
> 737 gdm 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 dbus-daemon/1
> 610 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 ttymon/1
> 257 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 powerd/4
> 375 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 hald-addon-n/1
> 602 root 0.0 0.0 0.0 0.0 0.0 25 75 0.0 0 0 0 0 inetd/4
> 736 gdm 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 dbus-launch/1
> 536 root 0.0 0.0 0.0 0.0 0.0 33 67 0.0 0 0 0 0 hald-addon-s/3
> 322 root 0.0 0.0 0.0 0.0 0.0 25 75 0.0 0 0 0 0 picld/4
> 616 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 rmvolmgr/1
> 759 gdm 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 metacity/1
> 612 root 0.0 0.0 0.0 0.0 0.0 65 35 0.0 0 0 0 0 fmd/27
> 450 root 0.0 0.0 0.0 0.0 0.0 50 50 0.0 0 0 0 0 console-kit-/2
> 287 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 dbus-daemon/1
> 297 root 0.0 0.0 0.0 0.0 0.0 20 80 0.0 0 0 0 0 zonestatd/5
> 355 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 cupsd/1
> 592 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 sac/1
> 725 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 in.ndpd/1
> 161 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 pfexecd/3
> 206 root 0.0 0.0 0.0 0.0 0.0 76 24 0.0 0 0 0 0 syseventd/18
> 357 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 hald-runner/1
> 598 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 ttymon/1
> 134 daemon 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 kcfd/3
> 47 netcfg 0.0 0.0 0.0 0.0 0.0 20 80 0.0 1 0 1 0 netcfgd/5
> 93 root 0.0 0.0 0.0 0.0 0.0 30 70 0.0 0 0 0 0 nwamd/10
> 45 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 dlmgmtd/6
> 599 root 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 utmpd/1
> 49 netadm 0.0 0.0 0.0 0.0 0.0 0.0 100 0.0 0 0 0 0 ipmgmtd/3
> 607 root 0.0 0.0 0.0 0.0 0.0 64 36 0.0 0 0 0 0 syslogd/11
> 11 root 0.0 0.0 0.0 0.0 0.0 11 89 0.0 0 0 0 0 svc.configd/19
> 9 root 0.0 0.0 0.0 0.0 0.0 33 67 0.0 0 0 0 0 svc.startd/15
> Total: 64 processes, 387 lwps, load averages: 0.77, 0.49, 0.21
>
>
> dtrace -n 'profile-10ms{@[stack()] = count()}'
>
> genunix`timeout_generic+0x41
> genunix`timeout+0x5b
> uhci`uhci_root_hub_allocate_intr_pipe_resource+0x97
> uhci`uhci_handle_root_hub_status_change+0x153
> genunix`callout_list_expire+0x77
> genunix`callout_expire+0x31
> genunix`callout_execute+0x1e
> genunix`taskq_thread+0x248
> unix`thread_start+0x8
> 1
>
> unix`splr+0x92
> unix`lock_set_spl+0x1d
> genunix`disp_lock_enter+0x2e
> unix`disp+0xad
> unix`swtch+0xa4
> genunix`cv_timedwait_hires+0xe0
> genunix`cv_reltimedwait+0x4f
> stmf`stmf_svc+0x423
> genunix`taskq_thread+0x248
> unix`thread_start+0x8
> 1
>
> unix`disp_anywork+0xc4
> unix`cpu_idle_mwait+0x7d
> unix`idle+0x114
> unix`thread_start+0x8
> 1
>
> unix`pg_ev_thread_swtch+0x124
> unix`swtch+0xdb
> genunix`cv_wait+0x61
> genunix`taskq_thread_wait+0x84
> genunix`taskq_thread+0x2d1
> unix`thread_start+0x8
> 1
>
> unix`av_check_softint_pending+0x24
> unix`av_dispatch_softvect+0x48
> unix`dispatch_softint+0x34
> unix`switch_sp_and_call+0x13
> 1
>
> dtrace`dtrace_state_clean+0xc
> genunix`cyclic_softint+0xdc
> unix`cbe_low_level+0x17
> unix`av_dispatch_softvect+0x5f
> unix`dispatch_softint+0x34
> unix`switch_sp_and_call+0x13
> 1
>
> genunix`callout_list_expire+0xce
> genunix`callout_expire+0x31
> genunix`callout_execute+0x1e
> genunix`taskq_thread+0x248
> unix`thread_start+0x8
> 1
>
> unix`mutex_enter
> genunix`cyclic_softint+0xdc
> unix`cbe_softclock+0x1a
> unix`av_dispatch_softvect+0x5f
> unix`dispatch_softint+0x34
> unix`switch_sp_and_call+0x13
> 1
>
> ata`ghd_timeout
> genunix`callout_expire+0x31
> genunix`callout_execute+0x1e
> genunix`taskq_thread+0x248
> unix`thread_start+0x8
> 1
>
> unix`todpc_rtcget+0xa0
> unix`todpc_get+0x1c
> unix`tod_get+0x14
> genunix`clock+0x6a9
> genunix`cyclic_softint+0xdc
> unix`cbe_softclock+0x1a
> unix`av_dispatch_softvect+0x5f
> unix`dispatch_softint+0x34
> unix`switch_sp_and_call+0x13
> 1
>
> unix`i86_monitor+0x1
> unix`idle+0x114
> unix`thread_start+0x8
> 1
>
> dtrace`dtrace_state_clean+0x17
> genunix`cyclic_softint+0xdc
> unix`cbe_low_level+0x17
> unix`av_dispatch_softvect+0x5f
> unix`dispatch_softint+0x34
> unix`switch_sp_and_call+0x13
> 1
>
> genunix`timeout_generic+0x53
> genunix`timeout+0x5b
> ata`ghd_timeout+0xc8
> genunix`callout_list_expire+0x77
> genunix`callout_expire+0x31
> genunix`callout_execute+0x1e
> genunix`taskq_thread+0x248
> unix`thread_start+0x8
> 1
>
> unix`disp_anywork+0xd7
> unix`idle+0x114
> unix`thread_start+0x8
> 1
>
> unix`mutex_enter+0x10
> genunix`taskq_thread_wait+0x84
> genunix`taskq_thread+0x2d1
> unix`thread_start+0x8
> 2
>
> unix`mutex_enter+0x10
> genunix`taskq_thread+0x248
> unix`thread_start+0x8
> 2
>
> genunix`fsflush_do_pages+0x124
> genunix`fsflush+0x39a
> unix`thread_start+0x8
> 2
>
> unix`mutex_exit
> unix`av_dispatch_softvect+0x5f
> unix`dispatch_softint+0x34
> unix`switch_sp_and_call+0x13
> 2
>
> unix`disp_getwork+0xa2
> unix`idle+0x9d
> unix`thread_start+0x8
> 2
>
> unix`disp_getwork+0xb9
> unix`idle+0x9d
> unix`thread_start+0x8
> 2
>
> unix`tsc_read+0xa
> genunix`gethrtime_unscaled+0xd
> unix`idle_enter+0x1b
> unix`idle+0xc9
> unix`thread_start+0x8
> 2
>
> unix`todpc_rtcget+0xda
> unix`todpc_get+0x1c
> unix`tod_get+0x14
> genunix`clock+0x6a9
> genunix`cyclic_softint+0xdc
> unix`cbe_softclock+0x1a
> unix`av_dispatch_softvect+0x5f
> unix`dispatch_softint+0x34
> unix`switch_sp_and_call+0x13
> 2
>
> unix`tsc_read+0xc
> genunix`gethrtime_unscaled+0xd
> unix`idle_enter+0x1b
> unix`idle+0xc9
> unix`thread_start+0x8
> 2
>
> unix`disp_anywork+0xd
> unix`cpu_idle_mwait+0x7d
> unix`idle+0x114
> unix`thread_start+0x8
> 2
>
> unix`do_splx+0x8d
> genunix`disp_lock_exit_nopreempt+0x43
> genunix`cv_wait+0x54
> genunix`taskq_thread_wait+0x84
> genunix`taskq_thread+0x2d1
> unix`thread_start+0x8
> 2
>
> unix`cpu_idle_exit+0x116
> unix`cpu_idle_mwait+0xfb
> unix`idle+0x114
> unix`thread_start+0x8
> 2
>
> unix`disp_getwork+0xdf
> unix`idle+0x9d
> unix`thread_start+0x8
> 2
>
> genunix`taskq_thread_wait+0x87
> genunix`taskq_thread+0x2d1
> unix`thread_start+0x8
> 2
>
> unix`cpu_idle_exit+0x33
> unix`cpu_idle_mwait+0xfb
> unix`idle+0x114
> unix`thread_start+0x8
> 2
>
> unix`bitset_atomic_del
> unix`idle+0x114
> unix`thread_start+0x8
> 2
>
> unix`atomic_and_64+0x4
> unix`av_dispatch_softvect+0x55
> unix`dispatch_softint+0x34
> unix`switch_sp_and_call+0x13
> 2
>
> unix`disp_getwork+0x12a
> unix`idle+0x9d
> unix`thread_start+0x8
> 2
>
> unix`disp_getwork+0x131
> unix`idle+0x9d
> unix`thread_start+0x8
> 2
>
> dtrace`dtrace_dynvar_clean+0xd9
> dtrace`dtrace_state_clean+0x23
> genunix`cyclic_softint+0xdc
> unix`cbe_low_level+0x17
> unix`av_dispatch_softvect+0x5f
> unix`dispatch_softint+0x34
> unix`switch_sp_and_call+0x13
> 2
>
> unix`_resume_from_idle+0xb
> 2
>
> unix`disp_anywork+0xc2
> unix`cpu_idle_mwait+0x7d
> unix`idle+0x114
> unix`thread_start+0x8
> 2
>
> unix`splr+0x92
> unix`disp+0x1dd
> unix`swtch+0xa4
> genunix`cv_wait+0x61
> genunix`taskq_thread_wait+0x84
> genunix`taskq_thread+0x2d1
> unix`thread_start+0x8
> 2
>
> unix`atomic_add_32+0x3
> unix`pg_ev_thread_swtch+0xa3
> unix`swtch+0xdb
> unix`idle+0xc4
> unix`thread_start+0x8
> 2
>
> unix`cpu_idle_mwait+0x5c
> unix`idle+0x114
> unix`thread_start+0x8
> 2
>
> unix`idle+0x55
> unix`thread_start+0x8
> 2
>
> unix`scan_memory+0xc
> unix`thread_start+0x8
> 3
>
> unix`disp_getwork+0xb6
> unix`disp+0x1c2
> unix`swtch+0xa4
> genunix`cv_wait+0x61
> genunix`taskq_thread_wait+0x84
> genunix`taskq_thread+0x2d1
> unix`thread_start+0x8
> 3
>
> unix`idle+0x80
> unix`thread_start+0x8
> 3
>
> unix`cpu_idle_mwait
> unix`thread_start+0x8
> 3
>
> unix`disp_anywork+0x9f
> unix`cpu_idle_mwait+0x7d
> unix`idle+0x114
> unix`thread_start+0x8
> 3
>
> 4
>
> unix`idle+0x84
> unix`thread_start+0x8
> 4
>
> unix`disp_getwork+0x142
> unix`idle+0x9d
> unix`thread_start+0x8
> 4
>
> unix`cpu_idle_enter+0x5f
> unix`cpu_idle_mwait+0xdc
> unix`idle+0x114
> unix`thread_start+0x8
> 4
>
> unix`i86_mwait+0xe
> unix`idle+0x114
> unix`thread_start+0x8
> 5
>
> unix`outw+0x8
> unix`cpu_acpi_write_port+0x1e
> unix`write_ctrl+0x32
> unix`speedstep_pstate_transition+0x52
> unix`speedstep_power+0x5f
> unix`cpupm_state_change+0x100
> unix`cpupm_plat_change_state+0x3f
> unix`cpupm_change_state+0x2a
> unix`cpupm_utilization_event+0x238
> unix`cmt_ev_thread_swtch_pwr+0x97
> unix`pg_ev_thread_swtch+0xd9
> unix`swtch+0xdb
> genunix`cv_timedwait_hires+0xe0
> genunix`cv_reltimedwait+0x4f
> stmf`stmf_svc+0x423
> genunix`taskq_thread+0x248
> unix`thread_start+0x8
> 5
>
> unix`disp_anywork+0x52
> unix`cpu_idle_mwait+0x7d
> unix`idle+0x114
> unix`thread_start+0x8
> 6
>
> unix`bitset_atomic_add+0x39
> unix`cpu_idle_mwait+0x78
> unix`idle+0x114
> unix`thread_start+0x8
> 6
>
> unix`disp_anywork+0xb4
> unix`cpu_idle_mwait+0x7d
> unix`idle+0x114
> unix`thread_start+0x8
> 6
>
> unix`bitset_atomic_del+0x3d
> unix`cpu_idle_mwait+0x120
> unix`idle+0x114
> unix`thread_start+0x8
> 7
>
> unix`cpu_idle_mwait+0x3e
> unix`idle+0x114
> unix`thread_start+0x8
> 7
>
> unix`outw+0x8
> unix`cpu_acpi_write_port+0x1e
> unix`write_ctrl+0x32
> unix`speedstep_pstate_transition+0x52
> unix`speedstep_power+0x5f
> unix`cpupm_state_change+0x100
> unix`cpupm_plat_change_state+0x3f
> unix`cpupm_change_state+0x2a
> unix`cpupm_utilization_event+0x238
> unix`cmt_ev_thread_swtch_pwr+0xc0
> unix`pg_ev_thread_swtch+0xd9
> unix`swtch+0xdb
> unix`idle+0xc4
> unix`thread_start+0x8
> 7
>
> unix`cpu_idle_mwait+0x108
> unix`idle+0x114
> unix`thread_start+0x8
> 9
>
> unix`disp_getwork+0x139
> unix`idle+0x9d
> unix`thread_start+0x8
> 18
>
> unix`dispatch_softint+0x27
> unix`switch_sp_and_call+0x13
> 19
>
> unix`cpu_idle_mwait+0xc1
> unix`idle+0x114
> unix`thread_start+0x8
> 35
>
> unix`atomic_and_64+0x4
> unix`cpu_idle_mwait+0x120
> unix`idle+0x114
> unix`thread_start+0x8
> 40
>
> unix`atomic_or_64+0x4
> unix`cpu_idle_mwait+0x78
> unix`idle+0x114
> unix`thread_start+0x8
> 44
>
> unix`disp_getwork+0xb6
> unix`idle+0x9d
> unix`thread_start+0x8
> 53
>
> unix`i86_monitor+0x10
> unix`cpu_idle_mwait+0xbe
> unix`idle+0x114
> unix`thread_start+0x8
> 55
>
> unix`outw+0x8
> unix`cpu_acpi_write_port+0x1e
> unix`write_ctrl+0x32
> unix`speedstep_pstate_transition+0x52
> unix`speedstep_power+0x5f
> unix`cpupm_state_change+0x100
> unix`cpupm_plat_change_state+0x3f
> unix`cpupm_change_state+0x2a
> unix`cpupm_utilization_event+0x238
> unix`cmt_ev_thread_swtch_pwr+0x97
> unix`pg_ev_thread_swtch+0xd9
> unix`swtch+0xdb
> genunix`cv_wait+0x61
> genunix`taskq_thread_wait+0x84
> genunix`taskq_thread+0x2d1
> unix`thread_start+0x8
> 55
>
> unix`do_splx+0x8d
> unix`xc_common+0x231
> unix`xc_call+0x46
> unix`speedstep_power+0xb3
> unix`cpupm_state_change+0x100
> unix`cpupm_plat_change_state+0x3f
> unix`cpupm_change_state+0x2a
> unix`cpupm_utilization_event+0x238
> unix`cmt_ev_thread_swtch_pwr+0xc0
> unix`pg_ev_thread_swtch+0xd9
> unix`swtch+0xdb
> unix`idle+0xc4
> unix`thread_start+0x8
> 60
>
> unix`cpu_idle_exit+0x1fc
> unix`cpu_idle_mwait+0xfb
> unix`idle+0x114
> unix`thread_start+0x8
> 78
>
> unix`do_splx+0x8d
> unix`xc_common+0x231
> unix`xc_call+0x46
> unix`speedstep_power+0xb3
> unix`cpupm_state_change+0x100
> unix`cpupm_plat_change_state+0x3f
> unix`cpupm_change_state+0x2a
> unix`cpupm_utilization_event+0x238
> unix`cmt_ev_thread_swtch_pwr+0x97
> unix`pg_ev_thread_swtch+0xd9
> unix`swtch+0xdb
> genunix`cv_timedwait_hires+0xe0
> genunix`cv_reltimedwait+0x4f
> stmf`stmf_svc+0x423
> genunix`taskq_thread+0x248
> unix`thread_start+0x8
> 90
>
> unix`do_splx+0x8d
> unix`xc_common+0x231
> unix`xc_call+0x46
> unix`speedstep_power+0xb3
> unix`cpupm_state_change+0x100
> unix`cpupm_plat_change_state+0x3f
> unix`cpupm_change_state+0x2a
> unix`cpupm_utilization_event+0x238
> unix`cmt_ev_thread_swtch_pwr+0x97
> unix`pg_ev_thread_swtch+0xd9
> unix`swtch+0xdb
> genunix`cv_wait+0x61
> genunix`taskq_thread_wait+0x84
> genunix`taskq_thread+0x2d1
> unix`thread_start+0x8
> 94
>
> unix`cpu_idle_enter+0x109
> unix`cpu_idle_mwait+0xdc
> unix`idle+0x114
> unix`thread_start+0x8
> 154
>
> unix`i86_mwait+0xd
> unix`cpu_idle_mwait+0xf1
> unix`idle+0x114
> unix`thread_start+0x8
> 2731
>
> ----- Original Message ----
> From: Michael Schuster<michaelsprivate at gmail.com>
> To: Discussion list for OpenIndiana<openindiana-discuss at openindiana.org>
> Sent: Thu, November 11, 2010 11:38:58 PM
> Subject: Re: [OpenIndiana-discuss] OpenIndiana CPU Usage.
>
> On Fri, Nov 12, 2010 at 06:52, Nicholas Metsovon<nmetsovo at yahoo.com> wrote:
>> I've been running Linux for more than ten years, but I'm pretty new to
>> OpenSolaris.
>>
>>
>> We want to put up a new website for videos, and we'd like a rock-solid stable
>> system, so I've been looking at OpenSolaris - and primarily OpenIndiana.
>>
>> We have a Dell PowerEdge 2900 with eight processing cores and 16 GB of RAM
>> for this project.
>>
>>
>> My question is this:
>>
>> When I have Linux loaded on this system, and I do a "top" command, it shows
>> something like 99.3% idle most of the time.
>>
>> When I have OpenIndiana, OpenSolaris, or Nexenta Core 3.0 on it, it shows more
>> like 80% idle, with the majority of the remaining 20% listed as "kernel".
>>
>> Is it normal for OpenSolaris to take up so much of the CPU just to run,
>> compared to Linux?
> can't comment on Linux, but a few points about Solaris:
> - does what you see persist over time (minutes)?
> - prstat may show a different picture
> - prstat -m shows per-process microstate accounting; this may
>    enlighten you about what's triggering the load (if it persists)
> - have a go at DTrace, something like
> # dtrace -n 'profile-10ms{@[stack()] = count()}'
> interrupted after a few seconds will show you what the kernel's
> doing most of the time
>
> HTH
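If you want to re-run that profile with less noise, a variant of Michael's
one-liner that only samples time spent in the kernel and keeps the ten hottest
stacks might help (just a sketch using stock DTrace features: for the profile
provider arg0 is the kernel PC and is non-zero only when the sample lands in
the kernel, and trunc() trims the aggregation before it is printed on exit):

  dtrace -n 'profile-10ms /arg0/ { @[stack()] = count(); } tick-10s { trunc(@, 10); exit(0); }'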
Try disabling SpeedStep?
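In the dtrace output above, the heaviest non-idle stacks are the SpeedStep
power-state transitions (speedstep_power leading into xc_call cross-calls under
cpupm_utilization_event), which fits the ~20% "kernel" time you are seeing. As
a quick experiment you could switch CPU power management off entirely. A
minimal sketch, assuming the stock /etc/power.conf layout (check power.conf(4)
on your build before trusting the exact keywords):

  In /etc/power.conf, replace any "cpupm enable" line with:

    cpupm disable

  then reload the settings:

    pfexec pmconfig

After that the CPUs should stay at their nominal clock (something like
'kstat -p cpu_info:::current_clock_Hz' can confirm it) and the speedstep_*
stacks should drop out of the profile. If the persistent kernel load
disappears, SpeedStep was the culprit; if you still want power management
afterwards, the poll-mode/event-mode variants of "cpupm enable" may behave
better, but that part is a guess on my side.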