[OpenIndiana-discuss] 800GB in 6 discs - 460GB in raidz
jason matthews
jason at broken.net
Fri Mar 24 23:33:49 UTC 2017
On 3/24/17 3:43 PM, Harry Putnam wrote:
> I continue to have a problem understanding the output of zfs list.
You may want zpool list, depending on what you are trying to get. Let's
see what you have done. Please show us: zpool status p0
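For reference, the config section of zpool status makes the layout
unambiguous. A six-disk raidz2 pool would look roughly like this
(device names hypothetical):

        NAME        STATE     READ WRITE CKSUM
        p0          ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0
            c2t5d0  ONLINE       0     0     0

A mirror layout would show mirror-0, mirror-1, and so on instead of
raidz2-0.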
> Ok, if that is correct then it means that 6 disks, individually
> adding up to 800+ GB in total, have been reduced by nearly half to
> accommodate raidz.
Did you use raidz2? In any case, I almost never deploy raidz(2).
Mirrors offer faster writes with only a minimal trade-off in money and
storage bays.
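For example, with six disks you could build either layout like this
(device names hypothetical; substitute your own):

# three striped mirror pairs: half the raw space, fast writes
zpool create p0 mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 mirror c2t4d0 c2t5d0

# one six-wide raidz2 vdev: four disks of data, two of parity
zpool create p0 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0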
>
> This is an install on a vbox vm. I created 6 more discs beyond 2 for
> a mirrored rpool.
The English-to-English translation is: you made two pools, one of which
has six disks in raidz, and another pool that is a single set of
mirrors. Am I following you?
[snip some stuff I could not grok]
>
> So, in round figures it loses 50 % of available space in raidz
> config.
No, it shouldn't. If you used raidz2 then you would lose 1/3 of your
six-disk pool to parity.
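Rough numbers, assuming your six disks add up to 800GB raw:

raidz1 (one parity disk):
% echo '800*5/6' | bc
666

raidz2 (two parity disks):
% echo '800*4/6' | bc
533

Landing near 460GB is below even the raidz2 figure, which is another
reason to look at the status output.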
> I have no experience with raidz and have only ever mirrored paired
> discs.
Good man. If Jesus was a storage engineer, that is how he would do it.
>
> I put these discs in raidz in an effort to get a little more out of
> the total space. But in fact got very very little more space.
zfs set compression=lz4 p0
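lz4 is close to free on CPU and usually claws back real space. You can
check what it is buying you with:

zfs get compression,compressratio p0

Note that compressratio only reflects data written after compression
was enabled; existing blocks stay uncompressed until rewritten.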
>
> I guessed that I would be left with around 600 GB of space.... based
> on a wag only.
Wag?
>
> This is all assuming I haven't made some boneheaded mistake or am
> suffering from a boneheaded non-understanding of what `zfs list' tells
> us.
>
> Apparently raidz does not really save enough space to make it worth
> doing. That is, in a space for config sense. Nothing really gained
> over just paired mirrors.
We'll have to review the bonehead part after you send me the zpool
status output.
Here is the only system I have in raidz. It has 24 raidz1 vdevs of 4
drives each. Each drive is 2TB.
In terms of raw storage, it looks like this:
jason at dbspare001:% echo '2*4*24' |bc
192
In terms of net storage it looks like this:
jason at dbspare001:% zpool list data
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
data    174T   111T  63.4T  63%  1.00x  ONLINE  -
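Before anyone panics about the gap between 192 and 174: zpool list
reports raidz capacity in binary units (TiB) and includes parity space,
so 174T is just the raw 192TB (decimal) converted:

jason at dbspare001:% echo '192*10^12/2^40' | bc
174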
The real cost is parity. My 1:4 layout spends one drive in four, or
25% of raw, on parity, which is proportionally more parity than your
claimed 1:6, and still nowhere near 50%. But we'll see when you send
zpool status p0.
My 1:4 ratio is not optimal, so don't copy this configuration.