[OpenIndiana-discuss] Puzzling behaviour in zpools

Len Zaifman leonardz at sickkids.ca
Mon Mar 4 18:52:35 UTC 2013


I have a system that I am configuring for maximum space, to use as a low-cost backup service. It has an Areca RAID card, 24 x 2 TB drives (format reports 1.82 TB) and 12 x 4 TB drives (format reports 3.64 TB).
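
(The 1.82 and 3.64 numbers are just the decimal-to-binary unit conversion; a quick check, in Python purely for illustration, nothing Solaris-specific:)

    # format reports sizes in binary units, so a "2 TB" (2e12-byte)
    # drive shows up as ~1.82 TB, and a "4 TB" drive as ~3.64 TB.
    for tb in (2, 4):
        print(f"{tb} TB drive -> {tb * 1e12 / 2**40:.2f} TB in format output")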

Using 2 of the 2 TB drives for a mirrored rpool, I created a pool on the remaining 2 TB drives as 2 x 11-disk raidz2 vdevs, and a pool on the 4 TB drives as 1 x 12-disk raidz2 vdev.
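
My expectation came from this back-of-envelope arithmetic (a Python sketch, purely illustrative; it assumes raidz2 usable space is roughly (disks - 2) x disk size per vdev and ignores ZFS overhead):

    def raidz2_usable_tb(vdevs, disks_per_vdev, disk_tb):
        # each raidz2 vdev gives up two disks' worth of space to parity
        return vdevs * (disks_per_vdev - 2) * disk_tb

    print(raidz2_usable_tb(2, 11, 1.82))  # 2 TB pool: ~32.8 TB usable
    print(raidz2_usable_tb(1, 12, 3.64))  # 4 TB pool: ~36.4 TB usable, ~10% larger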

I formatted the 2 TB disks with SMI labels and the 4 TB disks with EFI labels (SMI labels cannot address disks larger than 2 TB).

To my surprise, the 4 TB pool is smaller than the 2 TB pool, although I expected it to be about 10% larger. Does anyone know why this happened? I think I am ~7 TB short on the 4 TB pool.
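
In the same back-of-envelope terms, the shortfall I am asking about:

    expected = 36.4   # (12 - 2) * 3.64, from the layout I intended
    observed = 29.0   # what df reports for fourTBpool below
    print(expected - observed)   # ~7.4 TB, matching my "~7 TB short"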

Any advice is welcome. Thank you.

Details:
OS: OpenIndiana Development oi_151a X86


df shows

rpool/ROOT/openindiana
                      1.8T  1.6G  1.8T   1% /
rpool/smallbkup       1.6T   31K  1.6T   1% /rpool/smallbkup
ccmbkup12TB            32T   68K   32T   1% /ccmbkup12TB
fourTBpool             29T   56K   29T   1% /fourTBpool    <<< I think I am 7 TB short here

zpool status shows


  pool: ccmbkup12TB
config:

        NAME        STATE     READ WRITE CKSUM
        ccmbkup12TB  ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c2t0d2  ONLINE       0     0     0
            c2t0d3  ONLINE       0     0     0
            c2t0d4  ONLINE       0     0     0
            c2t0d5  ONLINE       0     0     0
            c2t0d6  ONLINE       0     0     0
            c2t0d7  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t1d1  ONLINE       0     0     0
            c2t1d2  ONLINE       0     0     0
            c2t1d3  ONLINE       0     0     0
            c2t1d4  ONLINE       0     0     0
          raidz2-1  ONLINE       0     0     0
            c2t1d5  ONLINE       0     0     0
            c2t1d6  ONLINE       0     0     0
            c2t1d7  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t2d1  ONLINE       0     0     0
            c2t2d2  ONLINE       0     0     0
            c2t2d3  ONLINE       0     0     0
            c2t2d4  ONLINE       0     0     0
            c2t2d5  ONLINE       0     0     0
            c2t2d6  ONLINE       0     0     0
            c2t2d7  ONLINE       0     0     0

  pool: fourTBpool
config:

        NAME        STATE     READ WRITE CKSUM
        fourTBpool  ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t3d1  ONLINE       0     0     0
            c2t3d2  ONLINE       0     0     0
            c2t3d3  ONLINE       0     0     0
            c2t3d4  ONLINE       0     0     0
            c2t3d5  ONLINE       0     0     0
          raidz2-1  ONLINE       0     0     0
            c2t3d6  ONLINE       0     0     0
            c2t3d7  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0
            c2t4d1  ONLINE       0     0     0
            c2t4d2  ONLINE       0     0     0
            c2t4d3  ONLINE       0     0     0
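
(Aside: running the same arithmetic on the layout exactly as zpool status prints it here, i.e. two 6-disk raidz2 vdevs, happens to match what df reports:)

    print(2 * (6 - 2) * 3.64)   # ~29.1 TB, versus the ~36.4 TB I expected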

  pool: rpool
 state: ONLINE
  scan: resilvered 17.7G in 0h2m with 0 errors on Mon Mar  4 09:34:18 2013
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c2t0d1s0  ONLINE       0     0     0

and format output shows:

       0. c2t0d0 <Areca-rpool080-VOL#000-R001 cyl 60785 alt 2 hd 255 sec 252>
       1. c2t0d1 <Areca-rpool180-VOL#001-R001 cyl 60785 alt 2 hd 255 sec 252>  rpoolbk
       2. c2t0d2 <Areca-ARC-1880-VOL#002-R001-1.82TB>
       3. c2t0d3 <Areca-ARC-1880-VOL#003-R001-1.82TB>
       4. c2t0d4 <Areca-ARC-1880-VOL#004-R001-1.82TB>
       5. c2t0d5 <Areca-ARC-1880-VOL#005-R001-1.82TB>
       6. c2t0d6 <Areca-ARC-1880-VOL#006-R001-1.82TB>
       7. c2t0d7 <Areca-ARC-1880-VOL#007-R001-1.82TB>
       8. c2t1d0 <Areca-ARC-1880-VOL#008-R001-1.82TB>
       9. c2t1d1 <Areca-ARC-1880-VOL#009-R001-1.82TB>
      10. c2t1d2 <Areca-ARC-1880-VOL#010-R001-1.82TB>
      11. c2t1d3 <Areca-ARC-1880-VOL#011-R001-1.82TB>
      12. c2t1d4 <Areca-ARC-1880-VOL#012-R001-1.82TB>
      13. c2t1d5 <Areca-ARC-1880-VOL#013-R001-1.82TB>
      14. c2t1d6 <Areca-ARC-1880-VOL#014-R001-1.82TB>
      15. c2t1d7 <Areca-ARC-1880-VOL#015-R001-1.82TB>
      16. c2t2d0 <Areca-ARC-1880-VOL#016-R001-1.82TB>
      17. c2t2d1 <Areca-ARC-1880-VOL#017-R001-1.82TB>
      18. c2t2d2 <Areca-ARC-1880-VOL#018-R001-1.82TB>
      19. c2t2d3 <Areca-ARC-1880-VOL#019-R001-1.82TB>
      20. c2t2d4 <Areca-ARC-1880-VOL#020-R001-1.82TB>
      21. c2t2d5 <Areca-ARC-1880-VOL#021-R001-1.82TB>
      22. c2t2d6 <Areca-ARC-1880-VOL#022-R001-1.82TB>
      23. c2t2d7 <Areca-ARC-1880-VOL#023-R001-1.82TB>
      24. c2t3d0 <Areca-ARC-1880-VOL#024-R001-3.64TB>
      25. c2t3d1 <Areca-ARC-1880-VOL#025-R001-3.64TB>
      26. c2t3d2 <Areca-ARC-1880-VOL#026-R001-3.64TB>
      27. c2t3d3 <Areca-ARC-1880-VOL#027-R001-3.64TB>
      28. c2t3d4 <Areca-ARC-1880-VOL#028-R001-3.64TB>
      29. c2t3d5 <Areca-ARC-1880-VOL#029-R001-3.64TB>
      30. c2t3d6 <Areca-ARC-1880-VOL#030-R001-3.64TB>
      31. c2t3d7 <Areca-ARC-1880-VOL#031-R001-3.64TB>
      32. c2t4d0 <Areca-ARC-1880-VOL#032-R001-3.64TB>
      33. c2t4d1 <Areca-ARC-1880-VOL#033-R001-3.64TB>
      34. c2t4d2 <Areca-ARC-1880-VOL#034-R001-3.64TB>
      35. c2t4d3 <Areca-ARC-1880-VOL#035-R001-3.64TB>


Len Zaifman
Systems Manager, High Performance Systems
The Centre for Computational Medicine
The Hospital for Sick Children
555 University Ave.
Toronto, Ont M5G 1X8

tel: 416-813-5513
email: leonardz at sickkids.ca



