[OpenIndiana-discuss] A ZFS related question: How successful is ZFS, really???

Doug Hughes doug at will.to
Mon Jan 12 15:34:17 UTC 2015


A couple of points and counterpoints from my own experience.
*) Tape really isn't dead. No, really. At about $.01/GB/copy, and a bit error
rate of roughly 1 in 10^20, you can't beat it. Use it for the right thing,
though: it excels as an offline archival medium, with expected media lifetimes
of around 30 years. Contrast that with cheap disk. Everybody forgets that you
still have to account for the chassis, memory, space, transportability, media
bandwidth, brackets, CPUs, and power used by an online storage system. Also,
if you go with really inexpensive disks to keep costs down, you sacrifice
roughly five orders of magnitude of media reliability compared to tape.
Seriously. There's a place for both. Places serious about data integrity and
offsite archive at a good cost still use tape. It's not dead. It's not even
dying, contrary to what some popular reading might say. You can back up ZFS to
tape by using zfs diff to get your list of changed files and then whatever
mechanism you want to get those items onto tape (rsync, TSM, whatever). The
point is that the full-filesystem-scan backup is definitely dead. Don't do
that. There are better mechanisms; a full scan doesn't scale.
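
A rough sketch of that snapshot-diff workflow (the dataset, snapshot, and tape
device names are made up for illustration; deletions and renames would need
their own handling in a real scheme):

  #!/bin/sh
  # Compare yesterday's and today's snapshots of a dataset and archive only
  # the files that were created or modified in between.
  FS=tank/home
  PREV=backup-2015-01-11
  CURR=backup-2015-01-12

  # zfs diff -H prints one tab-separated change per line: a type field
  # (+ created, - removed, M modified, R renamed) followed by the path.
  zfs diff -H "$FS@$PREV" "$FS@$CURR" |
    awk -F'\t' '$1 == "+" || $1 == "M" { print $2 }' |
    cpio -oc > /dev/rmt/0

Swap the cpio stage for rsync, TSM, or whatever actually drives your tape
library; the point is only that the candidate list comes from the snapshot
diff instead of a full filesystem walk.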

*) ZFS is really cool and all. We use it a lot! But to say it is unquestionably
the best might be a bit of an overstatement. GPFS is also really cool, and
arguably better in many ways. The erasure coding in its new declustered RAID
beats ZFS on rebuild time and on certain rare causes of data loss. GPFS allows
easy separation of metadata onto fast storage for indexing and search. The
GPFS policy engine is way cool: you can arbitrarily re-lay out a GPFS
filesystem across storage, and you can define pools with different
characteristics and use policy-driven migration or data placement to put data
on them. (We STILL can't do any relayout of any kind on ZFS. Argh.) The big
downside is price, of course; there are plenty of free ZFS solutions available
but none for GPFS. BTRFS has a technological edge over ZFS in relayout as
well. Why isn't it possible to shrink or re-lay out ZFS yet? IMHO this should
have been delivered about two years ago. As far as stability and market share
go, ZFS gets the big win, naturally.
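
For those who haven't seen it, the policy engine speaks a small SQL-ish rule
language. A rough, from-memory sketch of the placement-plus-migration idea
(the filesystem name, pool names, thresholds, and file pattern are all
invented here; check the exact syntax against IBM's GPFS documentation):

  # Purely illustrative: install a tiering policy on a hypothetical GPFS
  # filesystem called gpfs0, with pools named 'fast' and 'capacity'.
  cat > /tmp/tier.policy <<'EOF'
  /* Placement: new index files land on the fast pool, the rest on capacity. */
  RULE 'idx_on_fast' SET POOL 'fast' WHERE LOWER(NAME) LIKE '%.idx'
  RULE 'default'     SET POOL 'capacity'
  /* Migration: when the fast pool passes 85% full, push cold files down to 70%. */
  RULE 'demote_cold' MIGRATE FROM POOL 'fast' THRESHOLD(85,70)
       WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME) TO POOL 'capacity'
  EOF
  mmchpolicy gpfs0 /tmp/tier.policy        # install the placement rules
  mmapplypolicy gpfs0 -P /tmp/tier.policy  # run a migration pass now

That kind of policy-driven tiering across pools is exactly the relayout story
ZFS still doesn't have an answer to.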


On Mon, Jan 12, 2015 at 9:56 AM, Hans J Albertsson <
hans.j.albertsson at gmail.com> wrote:

> Does anyone have anything beyond their own impressions and war stories?
>
> Is anyone collecting statistics on storage solutions sold?
>
> Hans J. Albertsson
> From my Nexus 5
> On 12 Jan 2015 15:24, "Schweiss, Chip" <chip at innovates.com> wrote:
>
> > On Mon, Jan 12, 2015 at 8:17 AM, Andrew Gabriel <
> > illumos at cucumber.demon.co.uk> wrote:
> >
> > >
> > > Since you mention Sun/Oracle, I don't see them pushing ZFS very much
> > > anymore, although I am aware their engineers still work on it.
> > >
> >
> > Oracle pushes ZFS hard and aggressively.   I dare you to fill out their
> > contact form or download their virtual appliance demo.  Their sales
> > people
> > will be calling within the hour.
> >
> > We just recently went through a bidding war on an HA + DR system with 1/2
> > PB useable storage with many vendors including Nexenta and Oracle.
> > Oracle
> > was price competitive with Nexenta and is in my opinion a much more
> > polished product.
> >
> > We still chose to build our own on OmniOS because we could still do that
> > for about 1/2 the price of Oracle / Nexenta.  That's less than 1/4 the
> > price of 3PAR/HP, IBM, or Dell/Compellent. BTW, our OmniOS build is on the
> > exact same hardware Nexenta's would have been.
> >
> > -Chip
> > _______________________________________________
> > openindiana-discuss mailing list
> > openindiana-discuss at openindiana.org
> > http://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> _______________________________________________
> openindiana-discuss mailing list
> openindiana-discuss at openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>

