[OpenIndiana-discuss] Zfs stability "Scrubs"
Michael Stapleton
michael.stapleton at techsologic.com
Sat Oct 13 03:26:02 UTC 2012
I'm not a mathematician, but can anyone calculate the chance of the same
8K data block on both submirrors "going bad" on terabyte drives before
the data is ever read and fixed automatically during normal read
operations?
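Just to put rough numbers on it, here is the sort of back-of-envelope
sketch I mean, in Python. The per-block corruption rate is a made-up
illustration, not a drive spec, and the two submirrors are assumed to
fail independently:

TB = 10**12                     # 1 terabyte in bytes
BLOCK = 8 * 1024                # 8K block
blocks_per_tb = TB // BLOCK     # ~1.2e8 blocks per terabyte

p_block_bad = 1e-9              # assumed chance a given block silently rots
p_both_bad = p_block_bad ** 2   # same block bad on both submirrors

expected_double_hits = blocks_per_tb * p_both_bad
print(f"P(a specific block bad on both sides): {p_both_bad:.1e}")
print(f"Expected doubly-bad blocks per TB:     {expected_double_hits:.1e}")

With those numbers you get about 1e-18 per block and roughly 1e-10
expected double failures per terabyte. The weak spot is the independence
assumption: lose a whole drive while the survivor is carrying latent
errors and the odds change completely.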
And if you are not doing mirroring, you have already accepted a much
larger margin of error for the sake of $.
The VAST majority of data centers are not storing data on storage that
does checksums to verify it; that is just the reality. Regular backups
and site replication rule.
I am not saying scrubs are a bad thing, just that they are being
overemphasized, and some people who do not really understand them are
getting the wrong impression that doing scrubs very often will somehow
make them a lot safer.
Scrubs help. But a lot of people who are worrying about scrubs are not
even doing proper backups or regular DR testing.
Mike
On Fri, 2012-10-12 at 22:36 -0400, Doug Hughes wrote:
> So">?}?\, a lot of people have already answered this in various ways.
> I'm going to provide a little bit of direct answer, and add focus (and
> emphasis) to some of those other answers.
>
> On 10/12/2012 5:07 PM, Michael Stapleton wrote:
> > It is easy to understand that ZFS scrubs can be useful. But how often do
> > we scrub, or do the equivalent for, any other file system? UFS? VXFS?
> > NTFS? ...
> > ZFS has scrubs as a feature, but is it a need? I do not think so. Other
> > file systems accept the risk, mostly because they cannot really do
> > anything if there are errors.
> That's right. They cannot do anything. Why is that a good thing? If you
> have corruption on your filesystem because a block or even a single
> bit went wrong, wouldn't you want to know? Wouldn't you want to fix it?
> What if a number in an important financial document changed? Seems
> unlikely, but we've discovered at least 5 instances of spontaneous disk
> data corruption over the course of a couple of years. ZFS corrected them
> transparently. No data lost; automatic, clean, and transparent. The
> more data we create, the more that possibility of spontaneous data
> corruption becomes reality.
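> Conceptually the repair path during a read or a scrub looks something
> like the following toy sketch (illustrative Python only, not ZFS's
> actual code): every block has a checksum stored separately in the
> metadata, a mismatch tells you which copy is bad, and the good copy is
> used to rewrite it.
>
> import hashlib
>
> def read_with_self_heal(mirror_a, mirror_b, checksums, block_no):
>     """Toy model of a checksummed, self-healing mirrored read."""
>     expected = checksums[block_no]            # checksum lives in metadata
>     a, b = mirror_a[block_no], mirror_b[block_no]
>     a_ok = hashlib.sha256(a).hexdigest() == expected
>     b_ok = hashlib.sha256(b).hexdigest() == expected
>     if a_ok and not b_ok:
>         mirror_b[block_no] = a                # rewrite the bad copy
>     elif b_ok and not a_ok:
>         mirror_a[block_no] = b                # rewrite the bad copy
>     elif not a_ok and not b_ok:
>         raise IOError(f"block {block_no}: no good copy left")
>     return a if a_ok else b
>
> def scrub(mirror_a, mirror_b, checksums):
>     """A scrub is just this check forced over every block, read or not."""
>     for block_no in range(len(checksums)):
>         read_with_self_heal(mirror_a, mirror_b, checksums, block_no)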
> > It does no harm to do periodic scrubs, but I would not recommend doing
> > them often or even at all if scrubs get in the way of production.
> > What is the real risk of not doing scrubs?
> Data changing without you knowing it. Maybe this doesn't matter on an
> image file (though a JPEG could end up looking nasty or destroyed, and an
> MPEG-4 could be permanently damaged, but in a TIFF or other uncompressed
> format you'd probably never know).
>
> >
> > Risk cannot be eliminated, and we have to accept some risk.
> >
> > For example, data deduplication uses digests on data to detect
> > duplication. Most dedup systems assume that if the digest is the same
> > for two pieces of data, then the data must be the same.
> > This assumption is not actually true. Two differing pieces of data can
> > have the same digest, but the chance of this happening is so low that
> > the risk is accepted.
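> For scale, a quick birthday-bound estimate shows just how small that
> digest-collision chance is for a 256-bit hash. The block count below is
> made up for illustration:
>
> # Birthday bound: P(collision among n random k-bit digests) <= n*(n-1)/2^(k+1)
> k = 256                    # e.g. SHA-256
> n = 10**12                 # a trillion unique blocks, purely illustrative
> p_collision = n * (n - 1) / 2 ** (k + 1)
> print(f"P(any collision) <= {p_collision:.1e}")   # ~4.3e-54
>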
> But the risk of data being flipped once you have TBs of data is way
> above 0%. You can also do your own erasure coding if you like; that
> would be one way to achieve the same effect outside of ZFS.
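> As a sketch of that idea, here is a toy single-parity code in Python
> (real tools use stronger Reed-Solomon erasure codes); losing any one
> chunk is recoverable from the survivors plus the parity:
>
> from functools import reduce
>
> def xor_bytes(x, y):
>     return bytes(a ^ b for a, b in zip(x, y))
>
> def make_parity(chunks):
>     """XOR equal-sized data chunks into a single parity chunk."""
>     return reduce(xor_bytes, chunks)
>
> def recover_missing(surviving_chunks, parity):
>     """Rebuild the one missing chunk: XOR the parity with the survivors."""
>     return reduce(xor_bytes, surviving_chunks, parity)
>
> # Lose any one of three chunks and rebuild it from the rest plus parity.
> data = [b"AAAA", b"BBBB", b"CCCC"]
> parity = make_parity(data)
> assert recover_missing([data[0], data[2]], parity) == data[1]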
> >
> >
> > I'm only writing this because I get the feeling some people think scrubs
> > are a need. Maybe people associate doing scrubs with something like
> > doing NTFS defrags?
> >
> >
> NTFS defrag would only help with performance; a scrub helps with
> integrity. Totally different things.
>
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss