[OpenIndiana-discuss] More: smbd core dumps and transitions to maintenance

Gabriele Bulfon gbulfon at sonicle.com
Wed Aug 29 15:03:30 UTC 2012


Oops, I just checked the diffs between my illumos repo and the current one, and I found this:
changeset:   13595:565bd3085959
user:        Gordon Ross
date:        Sat Feb 04 15:55:57 2012 -0500
summary:     2041 panic in nsmb_close
... maybe we were going the wrong way?
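For reference, a minimal sketch of how the comparison can be done, assuming both trees are Mercurial clones (the local path and $UPSTREAM URL below are placeholders):
hg -R ~/illumos-gate incoming -l 20 $UPSTREAM   # changesets present upstream but not locally
hg -R ~/illumos-gate log -r 13595 --stat        # summary of changeset 13595, once pulled
hg -R ~/illumos-gate diff -c 13595              # the actual change for "2041 panic in nsmb_close"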
----------------------------------------------------------------------------------
From: Yuri Pankov
To: Discussion list for OpenIndiana
Date: 29 August 2012, 15:18:19 CEST
Subject: Re: [OpenIndiana-discuss] More: smbd core dumps and transitions to maintenance
On Wed, 29 Aug 2012 14:59:54 +0200 (CEST), Gabriele Bulfon wrote:
I restarted the machine, attached truss to the "smbd start" process, and tried accessing the share
from another machine, and there goes the SIGSEGV:
/40:    fcntl(18, F_SETLK64, 0xFC6F7730)                = 0
/40:    access("/var/tmp/sqlite_HKvnaIkoketg7Hs-journal", F_OK) Err#2 ENOENT
/40:    fstat64(18, 0xFC6F76B0)                         = 0
/40:    fcntl(18, F_SETLK64, 0xFC6F7720)                = 0
/40:    fcntl(7, F_SETLK64, 0xFC6F73E0)                 = 0
/40:    access("/var/smb/smbgroup.db-journal", F_OK)    Err#2 ENOENT
/40:    fstat64(7, 0xFC6F7360)                          = 0
/40:    llseek(7, 0, SEEK_SET)                          = 0
/40:    read(7, " * *   T h i s   f i l e".., 1024)     = 1024
/40:    llseek(7, 7168, SEEK_SET)                       = 7168
/40:    read(7, "\0\0\0\0\b\0
/40:    llseek(7, 4096, SEEK_SET)                       = 4096
/40:    read(7, "\0\0\0\0\b\0 (\0\0\0\0\0".., 1024)     = 1024
/40:    fcntl(7, F_SETLK64, 0xFC6F73B0)                 = 0
/40:    close(7)                                        = 0
/40:    close(18)                                       = 0
/40:        Incurred fault #6, FLTBOUNDS  %pc = 0xFED8FED4
/40:          siginfo: SIGSEGV SEGV_ACCERR addr=0xFDF9C9A0
/40:        Received signal #11, SIGSEGV [default]
/40:          siginfo: SIGSEGV SEGV_ACCERR addr=0xFDF9C9A0
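(For completeness, a sketch of how a trace like the above is typically captured; the PID and output file below are placeholders.)
pgrep -f smbsrv/smbd                    # find the running smbd
truss -f -o /tmp/smbd.truss -p <pid>    # attach, follow children, write the trace to a file
# then trigger the share access from a client and look for the fault in /tmp/smbd.truss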
----------------------------------------------------------------------------------
From: Gabriele Bulfon
To: Discussion list for OpenIndiana
Date: 29 August 2012, 14:45:41 CEST
Subject: [OpenIndiana-discuss] smbd core dumps and transitions to maintenance
Hi,
looks like krb5kdc was not the cause of the problems (or so it seems...).
I created the principal through the Kerberos tool, restarted the machine, and everything was fine.
So I joined my domain again via smbadm, and after a few failed attempts to connect to the shares,
I found smb down in maintenance again.
I found many core.smbd.xxxxxxx files, but I can't find a reason for them... and I can't find smbd.log anywhere.
I tried starting smbd manually, but it fails asking for smb.conf, which I never had...
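For anyone else hitting this, a quick sketch of where SMF keeps the relevant information (the FMRI and log path shown are the usual ones on OI, but verify with svcs -xv):
svcs -xv smb/server                                    # reason for maintenance plus the log file path
tail -50 /var/svc/log/network-smb-server:default.log   # the service log that svcs -xv points to
svcadm clear smb/server                                # retry the service once the cause is fixed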
Here's the pstack of one core:
# pstack core.smbd.1346233592
core 'core.smbd.1346233592' of 24103:   /usr/lib/smbsrv/smbd start
-----------------  lwp# 1 / thread# 1  --------------------
feef43d5 __sigsuspend (8047dd0, 0, 8047e08, 8059510) + 15
08059572 main     (2, 8047e30, 8047e3c, 8056eda, 805c640, 0) + 28e
08056f3b _start   (2, 8047ed4, 8047ee9, 0, 8047eef, 8047f0a) + 83
-----------------  lwp# 2 / thread# 2  --------------------
feef4075 __pollsys (fe4eefa0, 2, 0, 0, 805aa68, 0) + 15
fee8d8fb poll     (fe4eefa0, 2, ffffffff, 805aa32) + 6b
0805aa68 smbd_nicmon_daemon (0, 0, 0, 0) + 88
feeef63d _thrp_setup (fed20a40) + 86
feeef8d0 _lwp_start (fed20a40, 0, 0, 0, 0, 0)
-----------------  lwp# 3 / thread# 3  --------------------
feeef929 __lwp_park (fed18808, fed187f0, 0, fef62000, fed21240, fed187f0) + 19
feee9b83 cond_wait_queue (fed18808, fed187f0, 0, feee83be, fed187f0, fed18000) + 6a
feeea0fe __cond_wait (fed18808, fed187f0, fed187fc, 1, fed18000, fed187f0) + 8b
feeea158 cond_wait (fed18808, fed187f0, 0, fef62000) + 2e
fecfb4a0 dyndns_publisher (0, 0, 0, 0) + ab
feeef63d _thrp_setup (fed21240) + 86
feeef8d0 _lwp_start (fed21240, 0, 0, 0, 0, 0)
-----------------  lwp# 4 / thread# 4  --------------------
feef3535 __nanosleep (fe2f0f98, fe2f0f90, fe2f0fb8, fed0430c, 0, 11cbe36e) + 15
feee05dd sleep    (1, 0, fe2f0fc8, fecfee43, 12, fef62000) + 3b
fecfee27 smb_netbios_service (0, 0, 0, 0) + f8
feeef63d _thrp_setup (fed21a40) + 86
feeef8d0 _lwp_start (fed21a40, 0, 0, 0, 0, 0)
-----------------  lwp# 5 / thread# 5  --------------------
feef3845 __so_recvfrom (a, 81b7288, 240, 0, 81cfe68, 81cfe78) + 15
fe81c173 recvfrom (a, 81b7288, 240, 0, 81cfe68, 81cfe78) + 2e
fed04c78 smb_netbios_name_service (0, 0, 0, 0) + 1da
feeef63d _thrp_setup (fed22240) + 86
feeef8d0 _lwp_start (fed22240, 0, 0, 0, 0, 0)
-----------------  lwp# 6 / thread# 6  --------------------
feeef929 __lwp_park (806eb38, 806eb20, fe0bef68, fef62000, fed22a40, fef67180) + 19
feee9b83 cond_wait_queue (806eb38, 806eb20, fe0bef68, feae9bc0, 81c8420, 11) + 6a
feeea022 cond_wait_common (806eb38, 806eb20, fe0bef68, feee83be, 806eb20, 0) + 266
feeea376 __cond_reltimedwait (806eb38, 806eb20, fe0befb0, feee8665, 0, 0) + 57
feeea3b9 cond_reltimedwait (806eb38, 806eb20, fe0befb0, 80587b0) + 35
08058729 smbd_dc_monitor (0, 0, 0, 0) + 63
feeef63d _thrp_setup (fed22a40) + 86
feeef8d0 _lwp_start (fed22a40, 0, 0, 0, 0, 0)
-----------------  lwp# 7 / thread# 7  --------------------
feeef929 __lwp_park (fdfb4fbc, fdfb4fa4, 0, fef62000, fed23240, fdfb4fa4) + 19
feee9b83 cond_wait_queue (fdfb4fbc, fdfb4fa4, 0, feee83be, fdfb4fa4, 0) + 6a
feeea0fe __cond_wait (fdfb4fbc, fdfb4fa4, fdeeed78, feee8665, fdfb1000, fdfb4fbc) + 8b
feeea158 cond_wait (fdfb4fbc, fdfb4fa4, 100, fdf500fe) + 2e
fdf50165 smb_ddiscover_service (0, 0, 0, 0) + 78
feeef63d _thrp_setup (fed23240) + 86
feeef8d0 _lwp_start (fed23240, 0, 0, 0, 0, 0)
-----------------  lwp# 8 / thread# 8  --------------------
feef3535 __nanosleep (fdd9eb38, fdd9eb30, fdd9eb58, fdf500e0, 230, 114039df) + 15
feee05dd sleep    (258, fdd9ed6c, fdd9efc8, fdf50849, 2d767273, 6966666f) + 3b
fdf50837 mlsvc_timecheck (0, 0, 0, 0) + 2b
feeef63d _thrp_setup (fed23a40) + 86
feeef8d0 _lwp_start (fed23a40, 0, 0, 0, 0, 0)
-----------------  lwp# 9 / thread# 9  --------------------
feef4dd6 __door_unref (3, fdc9ffa0, 0, feeeec49, ffffffff, ffffffff) + 26
feedb875 door_unref_func (5e27, 0, 0, 0) + 4e
feeef63d _thrp_setup (fed24240) + 86
feeef8d0 _lwp_start (fed24240, 0, 0, 0, 0, 0)
-----------------  lwp# 10 / thread# 10  --------------------
feef4e1e __door_return (fdba0c40, 24, 0, 0, 806e8e0, 0) + 2e
08057880 smbd_door_return (806e8e0, fdba0c40, 24, 0, 0, fdba0c40) + 59
080574d9 smbd_door_dispatch (806e7d8, fdba0d10, f0, 0, 0, 80572e3) + 1f6
feef4e3b __door_return () + 4b
-----------------  lwp# 11 / thread# 11  --------------------
feeef929 __lwp_park (806eb6c, 806eb7c, 0, fef62000, fed25240, 806eb7c) + 19
feee9b83 cond_wait_queue (806eb6c, 806eb7c, 0, 5d, 8051da0, 100) + 6a
feeea0fe __cond_wait (806eb6c, 806eb7c, fed25240, 0, fef62000, fed25240) + 8b
feeea158 cond_wait (806eb6c, 806eb7c, fdaa1f5c, 0, fdaa1fc8, fef62000) + 2e
feeea19b pthread_cond_wait (806eb6c, 806eb7c, fef69a80, fef62000) + 24
08059faf smbd_refresh_monitor (0, 0, 0, 0) + 4d
feeef63d _thrp_setup (fed25240) + 86
feeef8d0 _lwp_start (fed25240, 0, 0, 0, 0, 0)
-----------------  lwp# 12 / thread# 12  --------------------
feef3535 __nanosleep (fce0ef58, fce0ef50, fce0ef90, feecd160, 329, 11335f25) + 15
feee05dd sleep    (384, fce0ef90, fce0efc8, 805a46d, fef69a80, 0) + 3b
0805a4e5 smbd_localtime_monitor (0, 0, 0, 0) + 97
feeef63d _thrp_setup (fed25a40) + 86
feeef8d0 _lwp_start (fed25a40, 0, 0, 0, 0, 0)
-----------------  lwp# 13 / thread# 13  --------------------
feeef929 __lwp_park (fdfb59e8, fdfb59d0, 0, fef62000, fed26240, fdfb59d0) + 19
feee9b83 cond_wait_queue (fdfb59e8, fdfb59d0, 0, feee83be, fdfb59d0, fdfb1000) + 6a
feeea0fe __cond_wait (fdfb59e8, fdfb59d0, 0, fdfb59c0, fdfb1000, fdfb59d0) + 8b
feeea158 cond_wait (fdfb59e8, fdfb59d0, 0, 0) + 2e
fdf5bdc3 smb_shr_publisher (0, 0, 0, 0) + e9
feeef63d _thrp_setup (fed26240) + 86
feeef8d0 _lwp_start (fed26240, 0, 0, 0, 0, 0)
-----------------  lwp# 15 / thread# 15  --------------------
feef4e1e __door_return (0, 0, 0, 0, fed27240, fef62000) + 2e
feedc4d5 door_create_func (0, 0, 0, 0) + 4a
feeef63d _thrp_setup (fed27240) + 86
feeef8d0 _lwp_start (fed27240, 0, 0, 0, 0, 0)
-----------------  lwp# 17 / thread# 17  --------------------
feef3845 __so_recvfrom (12, 810161c, 240, 0, 8101328, 8101338) + 15
fe81c173 recvfrom (12, 810161c, 240, 0, 8101328, 8101338) + 2e
fed01740 smb_netbios_datagram_service (0, 0, fcbfffe8, feeec1fd) + 1ed
feeef63d _thrp_setup (fed26a40) + 86
feeef8d0 _lwp_start (fed26a40, 0, 0, 0, 0, 0)
-----------------  lwp# 18 / thread# 18  --------------------
feeef929 __lwp_park (fed18c58, fed18c40, fc95ff18, fef62000, fed27a40, fef67180) + 19
feee9b83 cond_wait_queue (fed18c58, fed18c40, fc95ff18, 0, 0, 0) + 6a
feeea022 cond_wait_common (fed18c58, fed18c40, fc95ff18, feee83be, fed18c40, 0) + 266
feeea376 __cond_reltimedwait (fed18c58, fed18c40, fc95ff68, feee8665, fed18000, 3c) + 57
feeea3b9 cond_reltimedwait (fed18c58, fed18c40, fc95ff68, feede8a8) + 35
fecff435 smb_netbios_sleep (3c, 8104748, fc95ffc8, fecfaebc) + 58
fecfaeb4 smb_browser_service (0, 0, 0, 0) + 144
feeef63d _thrp_setup (fed27a40) + 86
feeef8d0 _lwp_start (fed27a40, 0, 0, 0, 0, 0)
-----------------  lwp# 53 / thread# 53  --------------------
feeef929 __lwp_park (feb0e734, feb0e754, fc761f30, fef62000, fed28a40, fef67180) + 19
feee9b83 cond_wait_queue (feb0e734, feb0e754, fc761f30, feaebf39, 8074390, 809dc40) + 6a
feeea022 cond_wait_common (feb0e734, feb0e754, fc761f30, fef62000, fed28a40, fe4f0d40) + 266
feeea227 __cond_timedwait (feb0e734, feb0e754, fc761fa8, feb05000, feb05000, feb07478) + 7b
feeea2b4 cond_timedwait (feb0e734, feb0e754) + 35
feae8536 umem_update_thread (0, 0, fc761fe8, feeec1fd) + 206
feeef63d _thrp_setup (fed28a40) + 86
feeef8d0 _lwp_start (fed28a40, 0, 0, 0, 0, 0)
-----------------  lwp# 441 / thread# 441  --------------------
fed8fed4 trim_whitespace (6f746172, 50007372, 7265776f, 65735520, 42007372, 756b6361) + 4d
7473696e ???????? ()
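A sketch of looking one step deeper into that core with mdb (the core file name and smbd path are taken from the pstack header above; the dcmds are standard mdb ones):
mdb /usr/lib/smbsrv/smbd core.smbd.1346233592
> ::status        # fault details: signal, faulting address, data model
> $C              # C stack backtrace of the faulting LWP
> $r              # register state at the time of the fault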
I wonder what OI version you are using; we had a problem with libsmb
compiled using gcc (https://www.illumos.org/issues/1863), but it was
fixed long ago. Additional details would help here.
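A sketch of collecting those details (the package name for the SMB server may differ between releases, hence the hedged pkg line):
cat /etc/release                       # OpenIndiana release string
uname -srv                             # kernel name and version
pkg info service/file-system/smb       # installed SMB server package version (name may vary)
pkg publisher                          # which repositories the packages come from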