[OpenIndiana-discuss] Puzzling behaviour in zpools
Len Zaifman
leonardz at sickkids.ca
Mon Mar 4 19:47:54 UTC 2013
RE: Jan's response:
I am sorry I was not clear.
For the 2 TB disks there are two 11-disk vdevs, so 18 data disks and 4 parity disks.
For the 4 TB disks there was supposed to be only one 12-disk vdev, so 10 data disks and 2 parity disks, not 4.
I am optimising for space, not performance.
So in raw disk space, 18 data disks gives 18 * 1.82 ~ 32.8 TiB,
and for the 4 TB disks, 10 data disks gives 10 * 3.64 ~ 36.4 TiB.
I still appeared to be about 7 TB short. The problem is that I had built what Jan described rather than what I had planned:
I had in fact made two vdevs, not one. With a single 12-disk vdev I get exactly what I wanted:
ccmbkup14TB 36T 70K 36T 1% /ccmbkup14TB
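For completeness, the fix was just to destroy the mis-built pool and recreate it with all twelve devices behind a single raidz2 keyword. Roughly (not my exact command line; device names as in the format output further down):

zpool destroy fourTBpool
# one raidz2 keyword followed by all 12 disks => a single 12-disk raidz2 vdev
zpool create ccmbkup14TB raidz2 \
    c2t3d0 c2t3d1 c2t3d2 c2t3d3 c2t3d4 c2t3d5 \
    c2t3d6 c2t3d7 c2t4d0 c2t4d1 c2t4d2 c2t4d3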
Sorry for wasting the bandwidth.
[OpenIndiana-discuss] Puzzling behaviour in zpools
Jan Owoc jsowoc at gmail.com
Mon Mar 4 19:22:12 UTC 2013
Hi Len,
On Mon, Mar 4, 2013 at 11:52 AM, Len Zaifman <leonardz at sickkids.ca> wrote:
> I have a system which I am configuring for maximum space, to use as a low-cost backup service. It has an Areca RAID card, 24 x 2 TB drives (format reports 1.82 TB each) and 12 x 4 TB drives (format reports 3.64 TB each).
Yes, that is to be expected. Manufacturers say "4 TB" meaning 4 000 000 000 000 bytes, while many operating systems report "3.64 TiB", meaning 3.64 * 1024 * 1024 * 1024 * 1024 bytes. They are two different ways of expressing the same capacity.
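If you want to double-check the conversion, a quick one-liner works (assuming bc is available, as it normally is):

# decimal drive sizes expressed in binary TiB: divide by 1024^4
echo 'scale=4; 2000000000000 / 1024^4' | bc    # ~1.82, what format reports for the 2 TB drives
echo 'scale=4; 4000000000000 / 1024^4' | bc    # ~3.64, what format reports for the 4 TB drives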
> Using two of the 2 TB drives for a mirrored rpool, I created a pool from the 2 TB drives as 2 x 11-disk raidz2 vdevs and a pool from the 4 TB drives as 1 x 12-disk raidz2 vdev.
Ok, so in that pool each raidz2 vdev has 11 disks, 2 of which are parity, leaving 9 for data: 2 vdevs * 9 disks * 2 TB = 36.0 TB = 32.7 TiB.
For the other pool, your zpool status output shows it was actually built as two 6-disk raidz2 vdevs, not one 12-disk vdev. Each vdev again gives 2 disks to parity, leaving 4 for data: 2 vdevs * 4 disks * 4 TB = 32.0 TB = 29.1 TiB.
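The same arithmetic as one-liners (ballpark only; raidz2 allocation and metadata overhead are ignored):

# usable space ~= vdevs * (disks per vdev - 2 parity) * disk size
echo 'scale=1; 2 * (11-2) * 1.82' | bc    # ccmbkup12TB: ~32.7 TiB
echo 'scale=1; 2 * (6-2) * 3.64' | bc     # fourTBpool as built: ~29.1 TiB
echo 'scale=1; 1 * (12-2) * 3.64' | bc    # a single 12-disk vdev would give ~36.4 TiB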
> df shows
>
> rpool/ROOT/openindiana
> 1.8T 1.6G 1.8T 1% /
> rpool/smallbkup 1.6T 31K 1.6T 1% /rpool/smallbkup
> ccmbkup12TB 32T 68K 32T 1% /ccmbkup12TB
> fourTBpool 29T 56K 29T 1% /fourTBpool <<< I think I am 7 TB short here
See calculation above :-). It matches up exactly.
Once your pools start to fill up, the usual advice is to use "zfs list" to see the actual usable capacity, since df gets confused by compression, deduplication, and other things. Your pools are empty, so for now the numbers happen to match.
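For example, both views are easy to compare (column names as documented in the zpool(1M) and zfs(1M) man pages; output omitted here):

# pool-level view: raw capacity, parity included
zpool list -o name,size,alloc,free fourTBpool
# dataset-level view: space actually available to data, after raidz2 parity
zfs list -o name,used,avail,mountpoint fourTBpool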
Cheers,
Jan
Len Zaifman
Systems Manager, High Performance Systems
The Centre for Computational Medicine
The Hospital for Sick Children
555 University Ave.
Toronto, Ont M5G 1X8
tel: 416-813-5513
email: leonardz at sickkids.ca
________________________________________
From: Len Zaifman
Sent: March 4, 2013 13:52
To: openindiana-discuss at openindiana.org
Subject: Puzzling behaviour in zpools
I have a system which I am configuring for maximum space, to use as a low-cost backup service. It has an Areca RAID card, 24 x 2 TB drives (format reports 1.82 TB each) and 12 x 4 TB drives (format reports 3.64 TB each).
Using two of the 2 TB drives for a mirrored rpool, I created a pool from the 2 TB drives as 2 x 11-disk raidz2 vdevs and a pool from the 4 TB drives as 1 x 12-disk raidz2 vdev.
I formatted the 2 TB disks using SMI labels and 4 TB disks using EFI labels.
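For reference, the layout I intended corresponds to zpool create commands of roughly this form (not a paste of my exact commands; device names as in the format output below):

# two 11-disk raidz2 vdevs: the raidz2 keyword appears twice
zpool create ccmbkup12TB \
    raidz2 c2t0d2 c2t0d3 c2t0d4 c2t0d5 c2t0d6 c2t0d7 c2t1d0 c2t1d1 c2t1d2 c2t1d3 c2t1d4 \
    raidz2 c2t1d5 c2t1d6 c2t1d7 c2t2d0 c2t2d1 c2t2d2 c2t2d3 c2t2d4 c2t2d5 c2t2d6 c2t2d7
# one 12-disk raidz2 vdev: a single raidz2 keyword followed by all 12 disks
zpool create fourTBpool raidz2 \
    c2t3d0 c2t3d1 c2t3d2 c2t3d3 c2t3d4 c2t3d5 c2t3d6 c2t3d7 c2t4d0 c2t4d1 c2t4d2 c2t4d3

Each raidz2 keyword on the command line starts a new vdev, so one keyword followed by twelve disks gives a single 12-disk vdev, while repeating the keyword splits the disks into separate vdevs.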
To my surprise, the 4 TB pool is smaller than the 2 TB pool, although I expected it to be 10% larger. Does anyone know why this happened? I think I am ~ 7 TB short on the 4 TB pool.
Any advice is welcome. Thank you.
Details:
OS: OpenIndiana Development oi_151a X86
df shows
rpool/ROOT/openindiana
1.8T 1.6G 1.8T 1% /
rpool/smallbkup 1.6T 31K 1.6T 1% /rpool/smallbkup
ccmbkup12TB 32T 68K 32T 1% /ccmbkup12TB
fourTBpool 29T 56K 29T 1% /fourTBpool <<< I think I am 7 TB short here
zpool status shows
pool: ccmbkup12TB
config:
NAME STATE READ WRITE CKSUM
ccmbkup12TB ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
c2t0d2 ONLINE 0 0 0
c2t0d3 ONLINE 0 0 0
c2t0d4 ONLINE 0 0 0
c2t0d5 ONLINE 0 0 0
c2t0d6 ONLINE 0 0 0
c2t0d7 ONLINE 0 0 0
c2t1d0 ONLINE 0 0 0
c2t1d1 ONLINE 0 0 0
c2t1d2 ONLINE 0 0 0
c2t1d3 ONLINE 0 0 0
c2t1d4 ONLINE 0 0 0
raidz2-1 ONLINE 0 0 0
c2t1d5 ONLINE 0 0 0
c2t1d6 ONLINE 0 0 0
c2t1d7 ONLINE 0 0 0
c2t2d0 ONLINE 0 0 0
c2t2d1 ONLINE 0 0 0
c2t2d2 ONLINE 0 0 0
c2t2d3 ONLINE 0 0 0
c2t2d4 ONLINE 0 0 0
c2t2d5 ONLINE 0 0 0
c2t2d6 ONLINE 0 0 0
c2t2d7 ONLINE 0 0 0
pool: fourTBpool
config:
NAME STATE READ WRITE CKSUM
fourTBpool ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
c2t3d0 ONLINE 0 0 0
c2t3d1 ONLINE 0 0 0
c2t3d2 ONLINE 0 0 0
c2t3d3 ONLINE 0 0 0
c2t3d4 ONLINE 0 0 0
c2t3d5 ONLINE 0 0 0
raidz2-1 ONLINE 0 0 0
c2t3d6 ONLINE 0 0 0
c2t3d7 ONLINE 0 0 0
c2t4d0 ONLINE 0 0 0
c2t4d1 ONLINE 0 0 0
c2t4d2 ONLINE 0 0 0
c2t4d3 ONLINE 0 0 0
pool: rpool
state: ONLINE
scan: resilvered 17.7G in 0h2m with 0 errors on Mon Mar 4 09:34:18 2013
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c2t0d0s0 ONLINE 0 0 0
c2t0d1s0 ONLINE 0 0 0
and format output shows:
0. c2t0d0 <Areca-rpool080-VOL#000-R001 cyl 60785 alt 2 hd 255 sec 252>
1. c2t0d1 <Areca-rpool180-VOL#001-R001 cyl 60785 alt 2 hd 255 sec 252> rpoolbk
2. c2t0d2 <Areca-ARC-1880-VOL#002-R001-1.82TB>
3. c2t0d3 <Areca-ARC-1880-VOL#003-R001-1.82TB>
4. c2t0d4 <Areca-ARC-1880-VOL#004-R001-1.82TB>
5. c2t0d5 <Areca-ARC-1880-VOL#005-R001-1.82TB>
6. c2t0d6 <Areca-ARC-1880-VOL#006-R001-1.82TB>
7. c2t0d7 <Areca-ARC-1880-VOL#007-R001-1.82TB>
8. c2t1d0 <Areca-ARC-1880-VOL#008-R001-1.82TB>
9. c2t1d1 <Areca-ARC-1880-VOL#009-R001-1.82TB>
10. c2t1d2 <Areca-ARC-1880-VOL#010-R001-1.82TB>
11. c2t1d3 <Areca-ARC-1880-VOL#011-R001-1.82TB>
12. c2t1d4 <Areca-ARC-1880-VOL#012-R001-1.82TB>
13. c2t1d5 <Areca-ARC-1880-VOL#013-R001-1.82TB>
14. c2t1d6 <Areca-ARC-1880-VOL#014-R001-1.82TB>
15. c2t1d7 <Areca-ARC-1880-VOL#015-R001-1.82TB>
16. c2t2d0 <Areca-ARC-1880-VOL#016-R001-1.82TB>
17. c2t2d1 <Areca-ARC-1880-VOL#017-R001-1.82TB>
18. c2t2d2 <Areca-ARC-1880-VOL#018-R001-1.82TB>
19. c2t2d3 <Areca-ARC-1880-VOL#019-R001-1.82TB>
20. c2t2d4 <Areca-ARC-1880-VOL#020-R001-1.82TB>
21. c2t2d5 <Areca-ARC-1880-VOL#021-R001-1.82TB>
22. c2t2d6 <Areca-ARC-1880-VOL#022-R001-1.82TB>
23. c2t2d7 <Areca-ARC-1880-VOL#023-R001-1.82TB>
24. c2t3d0 <Areca-ARC-1880-VOL#024-R001-3.64TB>
25. c2t3d1 <Areca-ARC-1880-VOL#025-R001-3.64TB>
26. c2t3d2 <Areca-ARC-1880-VOL#026-R001-3.64TB>
27. c2t3d3 <Areca-ARC-1880-VOL#027-R001-3.64TB>
28. c2t3d4 <Areca-ARC-1880-VOL#028-R001-3.64TB>
29. c2t3d5 <Areca-ARC-1880-VOL#029-R001-3.64TB>
30. c2t3d6 <Areca-ARC-1880-VOL#030-R001-3.64TB>
31. c2t3d7 <Areca-ARC-1880-VOL#031-R001-3.64TB>
32. c2t4d0 <Areca-ARC-1880-VOL#032-R001-3.64TB>
33. c2t4d1 <Areca-ARC-1880-VOL#033-R001-3.64TB>
34. c2t4d2 <Areca-ARC-1880-VOL#034-R001-3.64TB>
35. c2t4d3 <Areca-ARC-1880-VOL#035-R001-3.64TB>
Len Zaifman
Systems Manager, High Performance Systems
The Centre for Computational Medicine
The Hospital for Sick Children
555 University Ave.
Toronto, Ont M5G 1X8
tel: 416-813-5513
email: leonardz at sickkids.ca