[OpenIndiana-discuss] Simple zfs vs zpool space question.
Michael Hase
michael at edition-software.de
Fri Apr 11 16:06:24 UTC 2014
John,
On Fri, 11 Apr 2014, John McEntee wrote:
> Michael,
>
> You had me worried there, doubting my own sanity. I just removed the 2 drives and re-added them.
Well, no intention to worry anyone. But in this case I think it's better to
look a bit closer. Again: are you sure the 2 drives were really removed
when you told them to be? What did you do to remove them?
To my knowledge the block pointer rewrite project was never completed;
even Matt Ahrens said it's too difficult. So you can't remove a data
device from a zpool.
If your zpool iostat output really reflects the actual situation, I think
your only option is to attach additional disks to the stale devices.
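Attaching would look something like the sketch below. This is a hypothetical example only: the spare device names are placeholders, and the two p2 slices are the accidentally added data vdevs from John's listing. Mirroring them means a single SSD failure no longer takes the pool with it.

```shell
# Attach a second device to each single-disk top-level vdev, turning it
# into a mirror. <spare-slice-1>/<spare-slice-2> are placeholders for
# slices at least as large as the originals.
zpool attach tank c1t500A075103053202d0p2 <spare-slice-1>
zpool attach tank c1t500A07510306F9A7d0p2 <spare-slice-2>

# Watch the resilver until both new mirrors are healthy.
zpool status tank
```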
Cheers,
Michael
>
> # zpool add tank log c1t500A075103053202d0p2
> # zpool add tank log c1t500A07510306F9A7d0p2
> # zpool iostat -v
> capacity operations bandwidth
> pool alloc free read write read write
> ------------------------- ----- ----- ----- ----- ----- -----
> rpool 13.2G 26.5G 0 19 3.01K 89.2K
> mirror 13.2G 26.5G 0 19 3.01K 89.2K
> c1t5000CCA216DA22DEd0p1 - - 0 9 6.33K 89.6K
> c1t500A075103053202d0s0 - - 0 10 5.60K 89.6K
> c1t500A07510306F9A7d0s0 - - 0 10 4.59K 89.6K
> ------------------------- ----- ----- ----- ----- ----- -----
> tank 13.9T 5.10T 106 1.36K 5.23M 9.47M
> mirror 1.99T 746G 15 199 765K 1.35M
> c1t5000CCA225C5244Ed0 - - 3 86 350K 1.35M
> c1t5000CCA225C54DDDd0 - - 3 86 353K 1.35M
> c1t5000CCA225C505B8d0 - - 3 86 350K 1.35M
> mirror 1.99T 746G 15 199 766K 1.35M
> c1t5000CCA225C50784d0 - - 3 86 349K 1.35M
> c1t5000CCA225C5502Ed0 - - 3 86 352K 1.35M
> c1t5000CCA225C49869d0 - - 3 86 352K 1.35M
> mirror 1.99T 746G 15 199 766K 1.35M
> c1t5000CCA225C54ED8d0 - - 3 86 351K 1.35M
> c1t5000CCA225C56814d0 - - 3 86 351K 1.35M
> c1t5000CCA225C4E775d0 - - 3 86 350K 1.35M
> mirror 1.99T 746G 15 199 765K 1.35M
> c1t5000CCA225C2ADDAd0 - - 3 85 351K 1.35M
> c1t5000CCA225C04039d0 - - 3 85 352K 1.35M
> c1t5000CCA225C53428d0 - - 3 85 352K 1.35M
> mirror 1.99T 746G 15 199 766K 1.35M
> c1t5000CCA225C50517d0 - - 3 85 352K 1.36M
> c1t5000CCA225C55025d0 - - 3 85 352K 1.36M
> c1t5000CCA225C5660Dd0 - - 3 85 351K 1.36M
> mirror 1.99T 745G 15 199 764K 1.35M
> c1t5000CCA225C5502Dd0 - - 3 85 350K 1.35M
> c1t5000CCA225C484A3d0 - - 3 85 351K 1.35M
> c1t5000CCA225C4824Dd0 - - 3 85 352K 1.35M
> mirror 1.99T 746G 15 199 766K 1.35M
> c1t5000CCA225C4E366d0 - - 3 86 351K 1.35M
> c1t5000CCA225C54DDCd0 - - 3 85 352K 1.35M
> c1t5000CCA225C56751d0 - - 3 85 351K 1.35M
> c1t500A075103053202d0p2 3.35M 7.93G 0 98 4.46K 1.32M
> c1t500A07510306F9A7d0p2 3.37M 7.93G 0 22 12.3K 464K
> cache - - - - - -
> c1t500A075103053202d0p3 143G 7.88M 21 4 464K 487K
> c1t500A07510306F9A7d0p3 143G 7.88M 21 4 471K 488K
> ------------------------- ----- ----- ----- ----- ----- -----
>
> #
>
> And it is still not labelling them as logs in the output. I assume this is due to running oi_148, unless I have done something wrong.
>
> I know the logs are not mirrored, but I am going for performance over the risk of an SSD failure at the time of a reboot, at which point I believe I can just throw away the ZIL and continue? According to http://docs.oracle.com/cd/E19253-01/819-5461/ghbxs/
>
> Thanks
>
> John
>
>
>
> -----Original Message-----
> From: Michael Hase [mailto:michael at edition-software.de]
> Sent: 11 April 2014 15:59
> To: Discussion list for OpenIndiana
> Subject: Re: [OpenIndiana-discuss] Simple zfs vs zpool space question.
>
> On Fri, 11 Apr 2014, John McEntee wrote:
>
>> I have a zpool (3-way mirrors striped across 21 disks), tank, for which the zfs and zpool commands report slightly different space.
>
> Are you sure it's a 3-way mirror pool? It seems like you're running without any redundancy; look at devices c1t500A075103053202d0p2 and c1t500A07510306F9A7d0p2. If one of these breaks, I think your pool is toast.
>
> Maybe you wanted to add ZIL devices and forgot to specify them as such? It should look something like this:
>
> zpool iostat -v p1
> capacity operations bandwidth
> pool alloc free read write read write
> ----------- ----- ----- ----- ----- ----- -----
> p1 876G 980G 3 51 106K 295K
> mirror 529G 399G 1 18 47.8K 73.7K
> c4t1d0 - - 0 8 42.0K 74.5K
> c4t2d0 - - 0 8 42.1K 74.5K
> mirror 347G 581G 1 30 58.7K 141K
> c4t0d0 - - 0 12 42.6K 142K
> c4t3d0 - - 0 12 42.7K 142K
> logs - - - - - -
> c3t12d0s3 1.20M 7.94G 0 1 0 80.6K
> cache - - - - - -
> c3t12d0s0 48.0G 8M 1 0 13.9K 20.7K
> ----------- ----- ----- ----- ----- ----- -----
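To get output like that, the `log` (and `cache`) keywords have to be given explicitly on the `zpool add` command line; without them the device becomes a regular data vdev. A sketch, using the device names from the example above (pool and device names are illustrative):

```shell
# Add a dedicated intent-log device (shows up under "logs" in iostat -v).
zpool add p1 log c3t12d0s3

# A mirrored log, if two SSD slices are available (placeholders).
zpool add p1 log mirror <ssd-slice-1> <ssd-slice-2>

# Add an L2ARC cache device (shows up under "cache").
zpool add p1 cache c3t12d0s0
```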
>
>>
>> # zpool list tank
>> NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
>> tank 19.0T 14.5T 4.51T 76% 1.00x ONLINE -
>>
>> # zfs list tank
>> NAME USED AVAIL REFER MOUNTPOINT
>> tank 14.5T 4.21T 38K /tank
>>
>>
>> There are no quotas. Could someone please suggest why zpool states more space is available than zfs does?
>
> Metadata?
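One way to sanity-check the gap: the difference between zpool's FREE and zfs's AVAIL in the listing above is close to 1/64 of the pool size, which is consistent with ZFS holding back a small internal reservation that zfs, unlike zpool, subtracts from the space it reports (the exact fraction varies by implementation and version, so treat this as a rough plausibility check, not a spec):

```python
# Figures copied from the zpool list / zfs list output above, in TiB.
pool_size = 19.0   # zpool list SIZE
pool_free = 4.51   # zpool list FREE
zfs_avail = 4.21   # zfs list AVAIL

# Space the pool reports as free but zfs will not hand out.
gap = pool_free - zfs_avail
print(round(gap, 2))                # ~0.30 TiB
print(round(pool_size / 64, 3))     # ~0.297 TiB, i.e. roughly 1/64 of the pool
```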
>
> Cheers,
> Michael
>
>>
>> My average IOPS to each single SATA disk is 90, 1390 across the pool as a whole. Because each disk is pushed fairly hard, if the zpool gets more than 80% full the way free space is allocated changes and we take a severe performance hit. This makes (generally one) VMware host virtually unusable until I free up some space. Does anyone know whether this behaviour is tied to the zpool figure or the zfs figure, i.e. which one should I carefully monitor?
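A minimal way to watch that 80% threshold is to parse the capacity column from `zpool list` in scripted mode. A hedged sketch (pool name and threshold are examples, not anything from the thread):

```python
import subprocess

def pool_capacity(pool: str) -> int:
    """Return the pool's capacity percentage from `zpool list -H -o capacity`."""
    out = subprocess.check_output(
        ["zpool", "list", "-H", "-o", "capacity", pool], text=True
    )
    # Scripted mode (-H) prints e.g. "76%"; strip the percent sign.
    return int(out.strip().rstrip("%"))

def needs_attention(capacity_pct: int, threshold: int = 80) -> bool:
    """True when the pool has crossed the performance-cliff threshold."""
    return capacity_pct >= threshold
```

Run from cron, this could alert before the allocator switches into its slow, best-fit mode on a nearly full pool.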
>>
>> Thanks
>>
>> John
>>
>> P.S.
>>
>> # zpool iostat -v
>> capacity operations bandwidth
>> pool alloc free read write read write
>> ------------------------- ----- ----- ----- ----- ----- -----
>> rpool 13.2G 26.5G 0 19 3.01K 89.1K
>> mirror 13.2G 26.5G 0 19 3.01K 89.1K
>> c1t5000CCA216DA22DEd0p1 - - 0 9 6.30K 89.4K
>> c1t500A075103053202d0s0 - - 0 10 5.57K 89.4K
>> c1t500A07510306F9A7d0s0 - - 0 10 4.56K 89.4K
>> ------------------------- ----- ----- ----- ----- ----- -----
>> tank 14.5T 4.51T 106 1.39K 5.23M 10.1M
>> mirror 2.07T 659G 15 199 765K 1.35M
>> c1t5000CCA225C5244Ed0 - - 3 86 350K 1.35M
>> c1t5000CCA225C54DDDd0 - - 3 86 353K 1.35M
>> c1t5000CCA225C505B8d0 - - 3 86 350K 1.35M
>> mirror 2.07T 659G 15 199 765K 1.35M
>> c1t5000CCA225C50784d0 - - 3 86 349K 1.35M
>> c1t5000CCA225C5502Ed0 - - 3 86 352K 1.35M
>> c1t5000CCA225C49869d0 - - 3 86 352K 1.35M
>> mirror 2.07T 659G 15 199 766K 1.35M
>> c1t5000CCA225C54ED8d0 - - 3 86 351K 1.35M
>> c1t5000CCA225C56814d0 - - 3 86 351K 1.35M
>> c1t5000CCA225C4E775d0 - - 3 86 350K 1.35M
>> mirror 2.08T 659G 15 199 765K 1.35M
>> c1t5000CCA225C2ADDAd0 - - 3 85 351K 1.35M
>> c1t5000CCA225C04039d0 - - 3 85 352K 1.35M
>> c1t5000CCA225C53428d0 - - 3 85 352K 1.35M
>> mirror 2.07T 659G 15 199 766K 1.35M
>> c1t5000CCA225C50517d0 - - 3 85 352K 1.36M
>> c1t5000CCA225C55025d0 - - 3 85 352K 1.36M
>> c1t5000CCA225C5660Dd0 - - 3 85 351K 1.36M
>> mirror 2.08T 659G 15 199 764K 1.35M
>> c1t5000CCA225C5502Dd0 - - 3 85 350K 1.35M
>> c1t5000CCA225C484A3d0 - - 3 85 351K 1.35M
>> c1t5000CCA225C4824Dd0 - - 3 85 352K 1.35M
>> mirror 2.07T 659G 15 199 766K 1.35M
>> c1t5000CCA225C4E366d0 - - 3 86 351K 1.35M
>> c1t5000CCA225C54DDCd0 - - 3 85 352K 1.35M
>> c1t5000CCA225C56751d0 - - 3 85 351K 1.35M
>> c1t500A075103053202d0p2 6.89M 7.93G 0 15 1 298K
>> c1t500A07510306F9A7d0p2 6.73M 7.93G 0 15 1 298K
>> cache - - - - - -
>> c1t500A075103053202d0p3 143G 7.86M 20 4 463K 488K
>> c1t500A07510306F9A7d0p3 143G 7.75M 21 4 470K 488K
>> ------------------------- ----- ----- ----- ----- ----- -----
>>
>> ______________________________________________________________________
>> _
>>
>> The contents of this e-mail and any attachment(s) are strictly
>> confidential and are solely for the person(s) at the e-mail
>> address(es) above. If you are not an addressee, you may not disclose,
>> distribute, copy or use this e-mail, and we request that you send an
>> e-mail to admin at stirling-dynamics.com and delete this e-mail.
>> Stirling Dynamics Ltd. accepts no legal liability for the contents of
>> this e-mail including any errors, interception or interference, as
>> internet communications are not secure. Any views or opinions
>> presented are solely those of the author and do not necessarily
>> represent those of Stirling Dynamics Ltd. Registered In England No.
>> 2092114 Registered Office: 26 Regent Street, Clifton, Bristol. BS8 4HG
>> VAT no. GB 464 6551 29
>> ______________________________________________________________________
>> _
>>
>> This e-mail has been scanned for all viruses MessageLabs.
>> _______________________________________________
>> OpenIndiana-discuss mailing list
>> OpenIndiana-discuss at openindiana.org
>> http://openindiana.org/mailman/listinfo/openindiana-discuss
>>
>