[OpenIndiana-discuss] Sudden ZFS performance issue
wim at vandenberge.us
Sat Jul 6 03:52:45 UTC 2013
Here is the output of "iostat -xn" for the smaller of the two servers/pools
(80TB). The three c8 drives showing zeros across the board are the hot spares.
Nothing jumps out at me. The c5 drives are the boot pool (mirror), the ZIL
(mirror), and the L2ARC (stripe).
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.1 0.8 0.1 1.6 0.0 0.0 0.0 0.3 0 0 c5t2d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t3d0
36.6 162.6 420.0 18840.0 0.0 0.6 0.0 3.2 0 8 c5t4d0
51.3 160.5 719.9 18236.1 0.0 0.6 0.0 2.9 0 8 c5t5d0
867.6 0.6 2558.1 0.4 0.0 0.9 0.0 1.0 0 29 c8t5000C50056592673d0
778.3 6.7 2065.3 8.1 0.0 0.7 0.0 0.9 0 28 c8t5000C50056571453d0
938.3 0.6 2585.0 0.4 0.0 0.7 0.0 0.7 0 24 c8t5000C5005658F473d0
982.4 0.6 2602.4 0.4 0.0 0.6 0.0 0.6 0 24 c8t5000C5005658F8C3d0
866.8 6.9 2197.2 10.0 0.0 0.7 0.0 0.8 0 27 c8t5000C5005652C613d0
827.9 6.5 2300.4 9.8 0.0 0.9 0.0 1.0 0 29 c8t5000C5005655A633d0
862.3 6.6 2290.2 9.8 0.0 0.7 0.0 0.9 0 28 c8t5000C5005655B553d0
868.2 6.3 2304.4 9.5 0.0 0.7 0.0 0.9 0 26 c8t5000C500565659D3d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c8t5000C50056593873d0
985.4 5.2 2545.8 6.5 0.0 0.7 0.0 0.7 0 25 c8t5000C5005657D007d0
990.3 5.8 2442.8 6.4 0.0 0.7 0.0 0.7 0 26 c8t5000C5005657AD37d0
963.7 5.6 2406.0 6.5 0.0 0.7 0.0 0.8 0 25 c8t5000C5005658E2B7d0
976.3 0.6 2627.6 0.4 0.0 0.6 0.0 0.6 0 23 c8t5000C50056590B3Bd0
917.4 0.6 2523.0 0.4 0.0 0.7 0.0 0.7 0 24 c8t5000C5005658E3ABd0
929.3 0.6 2496.6 0.4 0.0 0.8 0.0 0.9 0 30 c8t5000C5005659283Bd0
862.4 6.5 2189.0 9.7 0.0 0.8 0.0 0.9 0 28 c8t5000C500565630FBd0
938.0 0.6 2604.8 0.4 0.0 0.7 0.0 0.7 0 24 c8t5000C5005659135Bd0
886.1 0.6 2492.7 0.4 0.0 0.9 0.0 1.1 0 31 c8t5000C5005659248Fd0
955.6 5.7 2466.4 7.1 0.0 0.8 0.0 0.8 0 26 c8t5000C500565880EFd0
857.4 0.6 2538.8 0.4 0.0 0.9 0.0 1.1 0 31 c8t5000C50056592ACFd0
831.5 0.6 2461.6 0.4 0.0 1.1 0.0 1.4 0 33 c8t5000C50056591F5Fd0
856.8 0.6 2422.2 0.4 0.0 0.9 0.0 1.0 0 29 c8t5000C5005659255Fd0
1011.3 5.4 2471.2 6.5 0.0 0.7 0.0 0.7 0 25 c8t5000C5005658E17Fd0
975.7 5.2 2481.4 6.8 0.0 0.7 0.0 0.8 0 26 c8t5000C5005658DB3Fd0
859.2 6.3 2289.7 9.5 0.0 0.8 0.0 0.9 0 27 c8t5000C50056561053d0
779.8 6.6 1961.4 8.1 0.0 0.7 0.0 0.9 0 26 c8t5000C50056577043d0
917.4 0.6 2623.7 0.4 0.0 0.7 0.0 0.8 0 25 c8t5000C5005658E6A3d0
848.6 7.0 2163.1 9.7 0.0 0.8 0.0 0.9 0 28 c8t5000C5005655A603d0
741.9 7.1 1960.3 8.0 0.0 0.8 0.0 1.0 0 27 c8t5000C50056576573d0
786.5 5.6 1889.8 8.1 0.0 0.7 0.0 0.9 0 26 c8t5000C5005657A2E3d0
952.2 0.6 2629.5 0.4 0.0 0.7 0.0 0.8 0 25 c8t5000C500565909E3d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c8t5000C50056594023d0
988.6 5.9 2526.3 6.9 0.0 0.8 0.0 0.8 0 26 c8t5000C5005657F3E3d0
775.0 6.9 2067.2 8.1 0.0 0.7 0.0 1.0 0 26 c8t5000C50056579ED7d0
862.6 6.8 2307.5 10.0 0.0 0.9 0.0 1.0 0 29 c8t5000C5005652AE27d0
876.8 0.6 2497.3 0.4 0.0 1.0 0.0 1.1 0 31 c8t5000C500565929B7d0
976.0 5.1 2517.6 6.3 0.0 0.7 0.0 0.7 0 26 c8t5000C5005658E25Bd0
714.6 6.8 2000.5 8.0 0.0 0.9 0.0 1.2 0 29 c8t5000C50056569ADBd0
928.2 0.6 2660.8 0.4 0.0 0.7 0.0 0.8 0 24 c8t5000C5005658EF5Bd0
858.8 0.6 2421.0 0.4 0.0 0.9 0.0 1.1 0 30 c8t5000C500565924CBd0
818.0 6.3 2188.1 9.5 0.0 0.9 0.0 1.1 0 30 c8t5000C5005656443Bd0
904.5 0.6 2603.6 0.4 0.0 0.9 0.0 1.0 0 31 c8t5000C5005659272Bd0
735.2 6.4 1993.1 7.8 0.0 0.8 0.0 1.1 0 27 c8t5000C500565776AFd0
794.2 7.3 1995.3 8.0 0.0 0.7 0.0 0.8 0 25 c8t5000C5005657640Fd0
961.9 5.5 2509.9 6.6 0.0 0.7 0.0 0.7 0 26 c8t5000C5005658E0BFd0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c8t5000C50056593E3Fd0
791.0 6.9 2105.8 7.8 0.0 0.8 0.0 1.0 0 30 c8t5000C500565696FFd0
916.8 0.6 2637.3 0.4 0.0 0.7 0.0 0.8 0 25 c8t5000C50056590E1Fd0
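
For anyone who wants to reproduce the check Saso suggests below, something
like this should flag a straggling disk over time. An untested sketch: the
field numbers assume the standard eleven-column "iostat -xn" layout (asvc_t
in field 8, %b in field 10, device name in field 11), and the 50 ms / 90%
thresholds are arbitrary illustrations:

  # sample every 5 seconds; print any device over 50 ms average service
  # time or more than 90% busy
  iostat -xn 5 | awk '$1 ~ /^[0-9]/ && ($8 > 50 || $10 > 90) { print $11, "asvc_t=" $8, "%b=" $10 }'
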
> On July 5, 2013 at 4:27 PM Saso Kiselkov <skiselkov.ml at gmail.com> wrote:
>
>
> On 05/07/2013 21:00, wim at vandenberge.us wrote:
> > Latencytop reports the following continuously for the pool process and it
> > doesn’t change significantly under load, which looks ok to me:
> >
> > genunix`cv_wait genunix`taskq_threa   14491    9.6 msec    6.2 sec  99.3 %
> > Wait for available CPU                14618   44.9 usec   9.8 msec   0.5 %
> > Adapt. lock spin                      14501   24.2 usec   1.7 msec   0.3 %
> > genunix`turnstile_block unix`mutex_      48  174.5 usec 453.9 usec   0.0 %
> > Spinlock spin                           194   35.4 usec 309.0 usec   0.0 %
> > genunix`turnstile_block unix`mutex_      20  315.5 usec   3.1 msec   0.0 %
> > genunix`turnstile_block unix`mutex_      62   56.1 usec 124.5 usec   0.0 %
> >
> > I realize 64GB is low for this size of storage. Would it be conceivable
> > that I reached some threshold there?
>
> What's 'iostat -xn' showing in the asvc_t and %b columns? Is some disk
> taking an excessive amount of time to fulfill I/O requests? For example,
> one disk in the pool showing 100% busy and very large service times can
> slow down the whole raidz and, by extension, the pool (all blocks read
> from the raidz will incur a significant hit).
>
> Cheers,
> --
> Saso
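
Two follow-up checks that may help here, sketched under assumptions ("tank"
stands in for the real pool name; the kstat names are the illumos ZFS ARC
statistics):

  # how full the ARC is relative to its target and cap (bytes)
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max

  # per-vdev ops and bandwidth, sampled every 5 seconds
  zpool iostat -v tank 5

  # per-device I/O latency histograms via the DTrace io provider;
  # Ctrl-C prints the aggregations
  dtrace -n 'io:::start { ts[arg0] = timestamp; }
      io:::done /ts[arg0]/ {
          @[args[1]->dev_statname] = quantize(timestamp - ts[arg0]);
          ts[arg0] = 0;
      }'

The ARC numbers speak to the 64GB question quoted above; the latency
histograms would make a single slow disk of the kind Saso describes stand
out immediately.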