[OpenIndiana-discuss] Puzzling behaviour in zpools
Jan Owoc
jsowoc at gmail.com
Mon Mar 4 19:22:12 UTC 2013
Hi Len,
On Mon, Mar 4, 2013 at 11:52 AM, Len Zaifman <leonardz at sickkids.ca> wrote:
> I have a system which I am configuring for maximum space to use as a low cost backup service. It has an Areca raid card, 24 2 TB drives (format reports 1.82 TB) and 12 4 TB drives (format reports 3.64 TB).
Yes, that is to be expected. Drive manufacturers say "4 TB" meaning
4 000 000 000 000 bytes. Many operating systems (format included) say
"3.64 TB" but mean tebibytes: 3.64 * 1024 * 1024 * 1024 * 1024 bytes.
They are two different ways of expressing the same capacity.
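For example, 4 000 000 000 000 / (1024 * 1024 * 1024 * 1024) is about
3.64, which is exactly the "3.64 TB" that format reports for your 4 TB
drives.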
> using 2 2 TB drives for a mirrored rpool, I create a 2 TB pool as 2 x 11 disk raidz2 vdevs and a 4 TB pool as 2 x 6 disk raidz2 vdevs
Ok, so in each 11-disk raidz2 vdev, 2 disks are for parity, leaving 9
for data. 2 vdevs * 9 disks * 2 TB = 36.0 TB = 32.7 TiB.
On the other pool, each 6-disk raidz2 vdev again uses 2 disks for
parity, leaving 4 for data. 2 vdevs * 4 disks * 4 TB = 32.0 TB = 29.1 TiB.
> df shows
>
> rpool/ROOT/openindiana
> 1.8T 1.6G 1.8T 1% /
> rpool/smallbkup 1.6T 31K 1.6T 1% /rpool/smallbkup
> ccmbkup12TB 32T 68K 32T 1% /ccmbkup12TB
> fourTBpool 29T 56K 29T 1% /fourTBpool <<< I think I am 7 TB short here
See calculation above :-). It matches up exactly.
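If you want to double-check this on the machine itself, something
along these lines should show where the space goes (pool name taken
from your df output):

  zpool status fourTBpool   # shows how many raidz2 vdevs make up the pool
  zpool list fourTBpool     # SIZE is the raw capacity, parity included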
As your pools fill up, use "zfs list" rather than df to check the
actual usable capacity, since df gets confused by compression,
deduplication, and other ZFS features. Your pools are empty right now,
so the numbers happen to match.
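For example (plain "zfs list" with no options works too; these are
just the columns I usually look at):

  zfs list -o name,used,available,mountpoint -r fourTBpool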
Cheers,
Jan