[OpenIndiana-discuss] Zpool upgrade didn't seem to upgrade
Bernd Helber
bernd at helber-it-services.com
Thu Jan 20 08:26:13 UTC 2011
Good morning, Michelle.
On 20.01.11 08:05, Michelle Knight wrote:
> Hi folks,
>
> OI 148.
>
> Three 1.5TB drives were replaced with three 2TB drives. They are on
> internal SATA channels c2t2d0, c2t3d0 and c2t4d0. One is a Seagate Barracuda
> and the other two are Western Digital Greens.
>
> mich at jaguar:~# cfgadm -lv
> Ap_Id                 Receptacle  Occupant    Condition  Information
>                       When        Type        Busy       Phys_Id
> Slot8                 connected   configured  ok         Location: Slot8
>                       Jan 1 1970  unknown     n          /devices/pci at 0,0/pci8086,3b4a at 1c,4:Slot8
> sata0/0::dsk/c2t0d0   connected   configured  ok         Mod: INTEL SSDSA2M040G2GC FRev: 2CV102HB SN: CVGB949301PH040GGN
>                       unavailable disk        n          /devices/pci at 0,0/pci1458,b005 at 1f,2:0
> sata0/1::dsk/c2t1d0   connected   configured  ok         Mod: INTEL SSDSA2M040G2GC FRev: 2CV102HB SN: CVGB949301PC040GGN
>                       unavailable disk        n          /devices/pci at 0,0/pci1458,b005 at 1f,2:1
> sata0/2::dsk/c2t2d0   connected   configured  ok         Mod: ST32000542AS FRev: CC34 SN: 5XW17ARW
>                       unavailable disk        n          /devices/pci at 0,0/pci1458,b005 at 1f,2:2
> sata0/3::dsk/c2t3d0   connected   configured  ok         Mod: WDC WD20EARS-00MVWB0 FRev: 51.0AB51 SN: WD-WMAZA0555575
>                       unavailable disk        n          /devices/pci at 0,0/pci1458,b005 at 1f,2:3
> sata0/4::dsk/c2t4d0   connected   configured  ok         Mod: WDC WD20EARS-00MVWB0 FRev: 51.0AB51 SN: WD-WMAZA0484508
>
> A zpool export and subsequent import, which should have taken the set to 4TB
> of overall storage in the raidz, appears not to have worked, despite the
> import taking what must have been ten to fifteen minutes. (During that time
> the drives were silent, the zpool process was mostly at 0% CPU, very
> occasionally peaking to 25%, and the system was very slow to respond.)
>
Personally, I assume the peaks were triggered by resilvering the pool.
It is not uncommon to see high load while a pool is resilvering.
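If you want to rule out a resilver or scrub still running in the background,
and check whether the pool has actually noticed the bigger disks, something
along these lines should tell you (just a sketch, assuming the pool is still
called "data" as in your output; the extra space only appears once autoexpand
is on, or once each replaced disk has been expanded by hand):

  # the "scan:" line shows whether a resilver or scrub is still running
  zpool status data

  # a raidz only grows to the new disk size if autoexpand is enabled ...
  zpool get autoexpand data
  zpool set autoexpand=on data

  # ... or after each replaced disk has been expanded manually
  zpool online -e data c2t2d0
  zpool online -e data c2t3d0
  zpool online -e data c2t4d0

  # SIZE should then reflect the three 2TB disks
  zpool list data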
Best practice in this case would have been to create a new zpool, e.g. a
raidz, and then zfs send from the old pool into a zfs receive on the new
pool... :-(
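Roughly like this (only a sketch; "oldpool", "newpool" and the snapshot name
are placeholders, not names from your machine):

  # snapshot everything recursively, then replicate it into the new pool
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -Fd newpool

The -R keeps child filesystems, properties and snapshots together in one
stream, and -F lets the receiving side be rolled back to accept it.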
I have three questions for you.
First: is this a production box?
Second: could you provide the output of zpool history? (The exact commands
are sketched below.)
Third, no offence, but do you have proper literature on ZFS?
If not, please have a look at Solaris Internals:
http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Or have a look at the OpenSolaris Bible.
Fourth: what would you like to achieve with OI?
Sorry, now I have made four questions out of it. ;)
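For the second question, the output of these two (run as root) is what I am
after:

  zpool history data
  zpool history -il data    # -i adds internally logged events, -l adds user/host/zone

That will show when the disks were replaced, when the export/import happened,
and whether any pool properties were changed along the way.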
Cheers :-)
> Any ideas please? Or is there still some process running in the background
> that I can't see?
>
> mich at jaguar:~# zfs list
> NAME USED AVAIL REFER MOUNTPOINT
> data 2.27T 401G 2.27T /mirror
> rpool 7.69G 28.7G 45K /rpool
> rpool/ROOT 3.70G 28.7G 31K legacy
> rpool/ROOT/openindiana 3.70G 28.7G 3.59G /
> rpool/dump 1.93G 28.7G 1.93G -
> rpool/export 5.22M 28.7G 32K /export
> rpool/export/home 5.19M 28.7G 32K /export/home
> rpool/export/home/mich 5.16M 28.7G 5.16M /export/home/mich
> rpool/swap 2.05G 30.7G 126M -
>
>
> mich at jaguar:~# zpool status
> pool: data
> state: ONLINE
> scan: resilvered 1.13T in 12h26m with 0 errors on Wed Jan 19 23:42:23 2011
> config:
>
> NAME STATE READ WRITE CKSUM
> data ONLINE 0 0 0
> raidz1-0 ONLINE 0 0 0
> c2t2d0 ONLINE 0 0 0
> c2t3d0 ONLINE 0 0 0
> c2t4d0 ONLINE 0 0 0
>
> errors: No known data errors
>
That took a very long time for just over 1 TB.
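If it happens again, you can watch what a resilver is actually doing while it
runs, for example (again assuming the pool name "data"):

  # per-vdev bandwidth and IOPS, refreshed every 5 seconds
  zpool iostat -v data 5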
>
> last pid: 1802; load avg: 0.61, 0.56, 0.61; up 0+19:55:20 07:06:01
> 74 processes: 73 sleeping, 1 on cpu
> CPU states: 99.8% idle, 0.0% user, 0.3% kernel, 0.0% iowait, 0.0% swap
> Kernel: 375 ctxsw, 653 intr, 120 syscall
> Memory: 3959M phys mem, 401M free mem, 1979M total swap, 1979M free swap
>
> PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND
> 1197 gdm 1 59 0 95M 28M sleep 1:23 0.04% gdm-simple-gree
> 922 root 3 59 0 102M 51M sleep 0:37 0.02% Xorg
> 1801 root 1 59 0 4036K 2460K cpu/3 0:00 0.01% top
> 1196 gdm 1 59 0 80M 13M sleep 0:00 0.00% metacity
> 1190 gdm 1 59 0 7892K 6028K sleep 0:00 0.00% at-spi-registry
> 640 root 16 59 0 14M 9072K sleep 0:09 0.00% smbd
> 1737 mich 1 59 0 13M 5392K sleep 0:00 0.00% sshd
> 148 root 1 59 0 8312K 1608K sleep 0:00 0.00% dhcpagent
> 672 root 26 59 0 27M 15M sleep 0:01 0.00% fmd
> 1198 gdm 1 59 0 87M 18M sleep 0:02 0.00% gnome-power-man
> 1247 root 1 59 0 6080K 2500K sleep 0:00 0.00% sendmail
> 11 root 21 59 0 15M 13M sleep 0:06 0.00% svc.configd
> 272 root 6 59 0 11M 4784K sleep 0:01 0.00% devfsadm
> 1220 root 24 59 0 13M 4404K sleep 0:01 0.00% nscd
> 45 netcfg 5 59 0 4716K 3268K sleep 0:00 0.00% netcfgd
> 1309 admin 1 59 0 16M 8404K sleep 14:30 0.00% sshd
> 1312 admin 1 59 0 16M 5468K sleep 3:47 0.00% sshd
> 787 root 1 59 0 12M 5928K sleep 0:05 0.00% intrd
> 1237 root 4 59 0 8400K 1936K sleep 0:03 0.00% automountd
> 9 root 15 59 0 20M 12M sleep 0:02 0.00% svc.startd
> 387 root 5 59 0 7500K 6024K sleep 0:01 0.00% hald
> 1308 root 1 59 0 13M 4788K sleep 0:01 0.00% sshd
> 252 root 5 60 -20 2544K 1460K sleep 0:00 0.00% zonestatd
> 1191 gdm 1 59 0 111M 42M sleep 0:00 0.00% gnome-settings-
> 1175 gdm 2 59 0 20M 10M sleep 0:00 0.00% gnome-session
> 790 root 4 59 0 15M 6432K sleep 0:00 0.00% rad
> 1188 gdm 1 59 0 12M 5944K sleep 0:00 0.00% gconfd-2
> 291 root 1 59 0 16M 5604K sleep 0:00 0.00% cupsd
> 1156 mich 1 59 0 13M 5432K sleep 0:00 0.00% sshd
> 132 daemon 3 59 0 13M 5104K sleep 0:00 0.00% kcfd
--
with kind regards