[OpenIndiana-discuss] Zfs stability "Scrubs"
Jim Klimov
jimklimov at cos.ru
Sat Oct 13 09:38:48 UTC 2012
2012-10-13 2:06, Jan Owoc wrote:
> All scrubbing does is put stress on drives and verify that data can
> still be read from them. If a hard drive ever fails on you and you
> need to replace it (how often does that happen?), then you know "hey,
> just last week all the other hard drives were able to read their data
> under stress, so are less likely to fail on me".
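(As a side note, kicking off such a scrub and checking on it is
trivial; the pool name "tank" below is just an example:

  zpool scrub tank        # start verifying all data in the pool
  zpool status -v tank    # watch scrub progress and any errors found

A line in root's crontab can make this a regular routine, e.g.:

  0 3 * * 0 /usr/sbin/zpool scrub tank   # example: Sundays at 03:00
)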
Also note that different types of media are affected differently
by I/O. CDs/DVDs and tape can pick up more scratches on reads,
and SSDs wear out on writes, while HDDs in stable conditions
("good" heat, power and vibration) don't mind I/O as far as the
media itself is concerned - though the head-movement mechanics
can wear out. So check the disk's rating (e.g. 24x7 or not) and
the vendor's assumed lifetime.
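(If smartmontools is available, the drive will tell you what it
thinks of itself; the device path below is only an example:

  smartctl -i /dev/rdsk/c0t0d0    # identity: model, firmware, rated speed
  smartctl -A /dev/rdsk/c0t0d0    # attributes: Power_On_Hours, Reallocated_Sector_Ct, etc.
)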
I've heard a claim, which I'm inclined to accept but can't vouch
for, that having the magnetic head read the bits off the platter
can actually help the media hold its data, by re-aligning the
magnetic domains to one of their two "valid" positions. Due to
Brownian motion and other factors, these miniature crystals can
turn around in their little beds and spell "zeroes" or "ones"
with less and less exactness; applying an oriented magnetic
field can push them back into one of the stable positions.
Well, whether that is crap or not, I'm not ready to say, but one
thing that is definitely true is that HDDs keep ECC on their
sectors. If a read produces a correctable error, the HDD itself
can try to repair the sector in place or by relocating it to the
spare area, perhaps applying stronger fields to discern the bits
better; if that succeeds, it returns the fixed data to the HBA
with no error reported. If the repair result is wrong, ZFS
detects the incorrect data via its checksums and issues its own
repair, using other copies or raidzN permutations. Also note
that this self-repair takes time during which the HDD does
nothing else, and *that* I/O timeout can cause grief for RAID
systems, HBA reset storms and so on (hence the "RAID editions"
of drives, TLER and the like).
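(On drives that support SCT ERC, that recovery timeout can be
inspected and capped with smartctl; values are in units of
0.1 s, the device path is again just an example, and note that
many drives forget the setting on a power cycle:

  smartctl -l scterc /dev/rdsk/c0t0d0          # show current error-recovery settings
  smartctl -l scterc,70,70 /dev/rdsk/c0t0d0    # cap read/write recovery at 7 seconds
)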
On the other hand, if you put regular stress on the disks and
see some error counters go high (monitoring!), you can order and
replace aging disks preemptively, instead of trying to recover a
pool with reduced redundancy a few days or months later.
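(A crude sketch of such monitoring on illumos, suitable for a
cron job; adjust to taste:

  iostat -En | egrep 'Errors'    # per-device soft/hard/transport error counters
  zpool status -x                # prints 'all pools are healthy' unless something is wrong
)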
HTH,
//Jim Klimov