[OpenIndiana-discuss] Slow zfs writes

Jason Matthews jason at broken.net
Tue Feb 12 01:09:49 UTC 2013



I am going to offer the obvious advice...

How full is your pool?  Zpool performance degrades as the pool fills up,
and the tools don't tell you how close you are to the cliff -- you find
the cliff on your own by falling off of it. As a rule of thumb, I keep
production systems less than 70% utilized.
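
To check, look at the CAP column of 'zpool list' and the per-dataset
numbers from 'zfs list'. A quick sketch, using the pool name from the
status output quoted below:

# zpool list test
# zfs list -r -o name,used,avail test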

Here is a real-life example. On a 14.5TB (configured) pool, I found the
cliff with 250+GB still reported as free. The system continued to write to
the pool, but throughput was dismal.

Is your pool full?

j.

-----Original Message-----
From: Ram Chander [mailto:ramquick at gmail.com] 
Sent: Monday, February 11, 2013 4:48 AM
To: Discussion list for OpenIndiana
Subject: [OpenIndiana-discuss] Slow zfs writes

Hi,

My OI box is experiencing slow zfs writes (around 30 times slower).
iostat reports the errors below even though the pool is healthy. This has
been happening for the past 4 days, though no change was made to the
system. Are the hard disks faulty? Please help.
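
One quick check for a single slow disk is to watch live service times;
a drive whose asvc_t or %b column sits far above its raidz peers is a
prime suspect. The 5-second interval below is arbitrary:

# iostat -xn 5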


root at host:~# zpool status -v
  pool: test
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support
        feature flags.
config:

        NAME         STATE     READ WRITE CKSUM
        test         ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            c2t0d0   ONLINE       0     0     0
            c2t1d0   ONLINE       0     0     0
            c2t2d0   ONLINE       0     0     0
            c2t3d0   ONLINE       0     0     0
            c2t4d0   ONLINE       0     0     0
          raidz1-1   ONLINE       0     0     0
            c2t5d0   ONLINE       0     0     0
            c2t6d0   ONLINE       0     0     0
            c2t7d0   ONLINE       0     0     0
            c2t8d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
          raidz1-3   ONLINE       0     0     0
            c2t12d0  ONLINE       0     0     0
            c2t13d0  ONLINE       0     0     0
            c2t14d0  ONLINE       0     0     0
            c2t15d0  ONLINE       0     0     0
            c2t16d0  ONLINE       0     0     0
            c2t17d0  ONLINE       0     0     0
            c2t18d0  ONLINE       0     0     0
            c2t19d0  ONLINE       0     0     0
            c2t20d0  ONLINE       0     0     0
            c2t21d0  ONLINE       0     0     0
            c2t22d0  ONLINE       0     0     0
            c2t23d0  ONLINE       0     0     0
          raidz1-4   ONLINE       0     0     0
            c2t24d0  ONLINE       0     0     0
            c2t25d0  ONLINE       0     0     0
            c2t26d0  ONLINE       0     0     0
            c2t27d0  ONLINE       0     0     0
            c2t28d0  ONLINE       0     0     0
            c2t29d0  ONLINE       0     0     0
            c2t30d0  ONLINE       0     0     0
          raidz1-5   ONLINE       0     0     0
            c2t31d0  ONLINE       0     0     0
            c2t32d0  ONLINE       0     0     0
            c2t33d0  ONLINE       0     0     0
            c2t34d0  ONLINE       0     0     0
            c2t35d0  ONLINE       0     0     0
            c2t36d0  ONLINE       0     0     0
            c2t37d0  ONLINE       0     0     0
          raidz1-6   ONLINE       0     0     0
            c2t38d0  ONLINE       0     0     0
            c2t39d0  ONLINE       0     0     0
            c2t40d0  ONLINE       0     0     0
            c2t41d0  ONLINE       0     0     0
            c2t42d0  ONLINE       0     0     0
            c2t43d0  ONLINE       0     0     0
            c2t44d0  ONLINE       0     0     0
        spares
          c5t10d0    AVAIL
          c5t11d0    AVAIL
          c2t45d0    AVAIL
          c2t46d0    AVAIL
          c2t47d0    AVAIL



# iostat -En

c4t0d0           Soft Errors: 0 Hard Errors: 5 Transport Errors: 0
Vendor: iDRAC    Product: Virtual CD       Revision: 0323 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 5 No Device: 0 Recoverable: 0
Illegal Request: 1 Predictive Failure Analysis: 0
c3t0d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: iDRAC    Product: LCDRIVE          Revision: 0323 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t0d1           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: iDRAC    Product: Virtual Floppy   Revision: 0323 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0


root at host:~# fmadm faulty
--------------- ------------------------------------  --------------  ---------
TIME            EVENT-ID                              MSG-ID          SEVERITY
--------------- ------------------------------------  --------------  ---------
Jan 05 08:21:09 7af1ab3c-83c2-602d-d4b9-f9040db6944a  ZFS-8000-HC     Major

Host        : host
Platform    : PowerEdge-R810
Product_sn  :

Fault class : fault.fs.zfs.io_failure_wait
Affects     : zfs://pool=test
                  faulted but still in service
Problem in  : zfs://pool=test
                  faulted but still in service

Description : The ZFS pool has experienced currently unrecoverable I/O
              failures.  Refer to http://illumos.org/msg/ZFS-8000-HC for
              more information.

Response    : No automated response will be taken.

Impact      : Read and write I/Os cannot be serviced.

Action      : Make sure the affected devices are connected, then run
              'zpool clear'.
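
For this pool that would be (pool name taken from the status output
above):

# zpool clear test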