[OpenIndiana-discuss] 800GB in 6 discs - 460GB in raidz

Harry Putnam reader at newsguy.com
Sat Mar 25 14:03:34 UTC 2017


jason matthews <jason at broken.net> writes:

> On 3/24/17 3:43 PM, Harry Putnam wrote:
>> I continue to have a problem understanding the output of zfs list.
>
> You may want zpool list, depending on what you are trying to get. Let's
> see what you have done. Please show us: zpool status p0

Ah, yes... it shows a major difference from `zfs list'.

>> Ok, if that is correct then it means that 6 disks, individually
>> totalling 800+ GB, have been reduced by nearly half to
>> accommodate raidz.

> Did you use raidz2? In any case, I almost never deploy
> raidz(2). Mirrors offer faster writes with just a minimal trade-off in
> money and storage bays.

First, a note: I've since moved on to running zfs send/recv, but with a
new and bigger pool to recv into.

On the OP arrangement:
I did not use raidz2. I used raidz1, specifically as in
  zpool create p0 raidz1 disk disk disk disk disk disk

I've since changed things a bit by adding an additional 416 G disk, so
now there are 7 disks... and this time, with the additional disk in
place, I did use raidz2, for what I guess would be some additional data
protection.
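
Rebuilding the pool as raidz2 with the 7 discs went, if I remember
right, roughly like this (device names are the ones shown in the
`zpool status' output further down):

  zpool destroy p0
  zpool create p0 raidz2 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0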

So with the new setup:

2 at  26 G (mirrored pair for rpool)
====================================

zpool p0 discs
2 at  96 G ..  192 G
2 at 116 G ..  232 G
2 at 216 G ..  432 G
1 at 416 G ..  416 G
              ======
              1272 G

zfs list (under raidz2 now) showed 460+ G available

But as you've pointed out, `zpool list' shows quite a different
picture:

These stats are with 7 discs totalling 1272 G under raidz2

    zpool list p0
  NAME SIZE ALLOC FREE EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
  p0   668G 333G  335G        -   0%   49%  1.00x  ONLINE  -
  
  ===========================================================
  
   zfs list -r p0
  NAME          USED  AVAIL  REFER  MOUNTPOINT
  p0            237G   223G  42.7K  /z
  p0/testg     1.07M   223G   532K  /z/testg
  p0/testg/t1   532K   223G   532K  /z/testg/t1
  p0/vb         237G   223G  6.56G  /z/vb
  p0/vb/vm      230G   223G   230G  /z/vb/vm
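
If I've got it straight, `zpool list' counts the raw space in the pool,
parity included, while `zfs list' shows what is actually usable after
parity, so the two will never agree.  I believe the per-vdev breakdown
can also be had with:

  zpool list -v p0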
  


>> This is an install on a vbox vm.  I created 6 more discs beyond 2 for
>> a mirrored rpool.

> The English to English translation is: you made two pools, one of
> which has six disks in raidz and another pool that is a single set of
> mirrors.
> Am I following you?

Yes, but as explained above the setup is different now, in that the
raidz is now raidz2 and there are 7 discs instead of 6.

> [snip some stuff i could not grok]
Sorry for my poor command of how to make myself understood.

>> So, in round figures it loses 50 % of available space in raidz
>> config.
>
> No, it shouldn't. If you used raidz2 then you would lose 1/3 of your
> 6-disk pool to parity.

`zpool list' agrees with your assessment above.

But it is a slightly different story now in raidz2 with 7 discs:
  668 G out of a possible 1272 G, so close to 50 % in raidz2 [Note: see
  `zpool list' above]
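
If I'm reading things right, the bigger factor is that a raidz vdev
only uses as much of each member disc as the smallest one (96 G here),
so the 116 G, 216 G and 416 G discs each contribute only about 96 G.
The arithmetic then seems to work out:

  raw space seen by the pool:  7 x ~96 G = ~670 G  (zpool list SIZE: 668 G)
  minus 2 discs of parity:     5 x ~96 G = ~480 G  (zfs list USED+AVAIL: ~460 G)

with the last few G presumably going to pool overhead.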

>> I have no experience with raidz and have only ever mirrored paired
>> discs.

> Good man. If Jesus was a storage engineer, that is how he would do it.

My reasons weren't quite so inspired though... My pea brain saw
mirroring as simpler.

>> I put these discs in raidz1 in an effort to get a little more out of
>> the total space.  But in fact got very, very little more space.

> zfs set compression=lz4 p0

Thanks. (I should have thought of that.)
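
For the record, I gather lz4 compression only applies to data written
after the property is set, so after re-sending the data the effect can
be checked with something like:

  zfs get compression,compressratio p0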

>> I guessed that I would be left with around 600 GB of space.... based
>> on a wag only.
>
> Wag?

  `Wild assed guess'

>> This is all assuming I haven't made some boneheaded mistake or am
>> suffering from a boneheaded non-understanding of what `zfs list' tells
>> us.

[...]

> we'll have to review the bonehead part after you send me zpool status

[NOTE: understand that the data below is NOT the setup my OP was about]

 zpool status p0
    pool: p0
   state: ONLINE
    scan: none requested
  config:
          NAME        STATE     READ WRITE CKSUM
          p0          ONLINE       0     0     0
            raidz2-0  ONLINE       0     0     0
              c3t3d0  ONLINE       0     0     0
              c3t4d0  ONLINE       0     0     0
              c3t5d0  ONLINE       0     0     0
              c3t6d0  ONLINE       0     0     0
              c3t7d0  ONLINE       0     0     0
              c3t8d0  ONLINE       0     0     0
              c3t9d0  ONLINE       0     0     0
  errors: No known data errors

As you can see, the setup is now raidz2, but when I posted the OP it
was 6 discs under raidz1.

[...] snipped good examples

Thanks for the good examples.

And thanks to all posters for showing great patience.

Viewing things with `zpool list' has made things clearer, and I'm now
gaining some experience and understanding of raidz1 and raidz2.

Now, with your and others' help, I'm beginning to see that my old
method of paired disc mirrors is good for what I'm doing, and I will be
using that technique once I get the hardware HOST reinstalled.
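
Just so I have it written down somewhere, I believe a pool of paired
mirrors on the hardware host would look roughly like this (pool and
device names here are only placeholders):

  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

with more pairs added later as needed:

  zpool add tank mirror c0t4d0 c0t5d0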



