[OpenIndiana-discuss] ZFS with Deduplication for NFS server
Roy Sigurd Karlsbakk
roy at karlsbakk.net
Fri Apr 22 18:19:45 UTC 2011
That's theory; in practice, even with sufficient RAM/L2ARC and some amount of SLOG, dedup slows writes to a crawl. My test was done with 8TB net storage, 8GB RAM, and two 80GB X25-M SSDs divided into 2x4GB SLOG (mirrored), with the rest used for L2ARC. The application tested was Bacula, with the OI box acting as a storage daemon (bacula-sd). Performance was OK until about 1TB was used; the dedup ratio was still low, since this was the initial backup, but write speed had dropped to tens of MB/s.
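
For reference, splitting a pair of SSDs between SLOG and L2ARC like that
is typically done along these lines (a minimal sketch; the pool and
slice names below are made up, and the 4GB/remainder slices would have
to be created with format(1M) first):

  # mirrored SLOG on the first 4GB slice of each SSD
  zpool add tank log mirror c2t0d0s0 c2t1d0s0
  # remaining space on both SSDs as (striped) L2ARC
  zpool add tank cache c2t0d0s1 c2t1d0s1
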
roy
----- Original Message -----
> The basic math behind the scenes is as follows (and the numbers are
> not entirely exact):
>
> 1. DDT data is kept in the metadata part of the ARC;
> 2. the default metadata maximum is arc_c_max / 4.
>
> Note that you can raise that limit (see the sketch after this list).
>
> 3. ARC max is RAM - 1GB.
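>
> As a sketch (this assumes the zfs_arc_meta_limit tunable as found on
> OpenSolaris/illumos builds of that era; verify the name against your
> build), the metadata cap can be raised in /etc/system, e.g. to 4GB,
> followed by a reboot:
>
>   * /etc/system -- example value only
>   set zfs:zfs_arc_meta_limit=0x100000000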
>
> So, if you have 8GB of RAM, your ARC max is 7GB and your metadata max
> is 1.75GB; a server with 8GB of RAM will therefore keep at most 1.75GB
> of DDT in the ARC. Each DDT entry is said to take about 250 bytes.
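>
> To make that concrete, a back-of-the-envelope calculation (assuming
> ~250 bytes per entry and an average block size of 128KB, i.e. the
> default recordsize with mostly large files):
>
>   # DDT entries that fit in 1.75GB of ARC metadata
>   echo '1.75 * 1024^3 / 250' | bc        # ~7.5 million entries
>   # unique data those entries cover at 128KB per block
>   echo '7516192 * 128 / 1024^2' | bc -l  # ~917 GB
>
> So with default tuning, an 8GB box keeps the whole DDT in RAM only up
> to roughly 1TB of unique data.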
>
> Now the tricky part: those numbers are maximum values, and you also
> need some space to store "normal" metadata, not just the DDT. You also
> can't really distinguish the DDT from other metadata, which
> unfortunately leaves room for some guessing.
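>
> For what it's worth, you can at least see how large the DDT itself has
> grown on a pool that already has dedup enabled (the pool name below is
> just an example):
>
>   # DDT histogram plus total entries and per-entry on-disk/in-core sizes
>   zdb -DD tank
>   # shorter summary of the same
>   zpool status -D tank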
>
> For performance, even if you have enough RAM and L2ARC, the ARC warmup
> time is more critical, as the L2ARC contents are currently lost on
> reboot (and the ARC contents as well, obviously). That's the downside
> of having dedup integrated into the filesystem.
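>
> If you want to watch the warmup, the arcstats kstats give a rough
> picture (a sketch; l2_size is only meaningful once cache devices are
> attached):
>
>   # current ARC and L2ARC sizes, in bytes
>   kstat -p zfs:0:arcstats:size zfs:0:arcstats:l2_size
>   # hit/miss counters, refreshed every 10 seconds
>   kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses 10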
>
> On 22.04.2011, at 0:48, Eric D. Mudama wrote:
>
> > On Thu, Apr 21 at 14:12, James Kohout wrote:
> >> All,
> >> I've been running OpenSolaris b134 with a 9T RAID-Z2 array as a
> >> backup server in a production environment. Whenever I tried to turn
> >> on ZFS deduplication I always had crashes and other issues, which I
> >> most likely attribute to the known ZFS dedup bugs in b134. Since I
> >> rebuilt the pool without dedup, things have been running great for
> >> several months without a glitch. As a result, I am highly confident
> >> it was not a hardware issue.
> >>
> >> So I'm looking to upgrade to oi148 to be able to enable
> >> deduplication. Does anyone have experience running a ZFS RAID-Z2
> >> pool with deduplication in a production environment? Is ZFS
> >> deduplication in oi148 considered stable/production-ready? I would
> >> hate to break a working setup chasing a feature that is not ready.
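> >>
> >> (For reference, enabling it is just a per-dataset property; the
> >> pool/dataset names below are made up:)
> >>
> >>   # turn on dedup for the backup dataset and watch the pool-wide ratio
> >>   zfs set dedup=on tank/backups
> >>   zpool list -o name,size,alloc,dedupratio tank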
> >>
> >> Any feedback or experience would be appreciated.
> >
> > The summary of list postings over the last 6 months is that dedup
> > requires far more RAM and/or L2ARC than most people budgeted for in
> > order to work as smoothly as a non-dedup installation, and that when
> > under-budgeted on RAM/L2ARC, the performance of scrubs and snapshot
> > deletion is atrocious.
> >
> > I don't have the math handy on the memory requirements; maybe
> > someone can post that part of the summary.
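> >
> > A rough way to estimate it up front is to let zdb simulate the dedup
> > table on the existing pool (the pool name is just an example; this
> > walks all data in the pool, so it takes a while):
> >
> >   # simulated DDT histogram, including entry count and in-core entry size
> >   zdb -S tank
> >
> > Multiplying the total entry count by the in-core size per entry gives
> > a ballpark for the RAM/L2ARC needed to keep the whole DDT cached.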
> >
> >
> >
> > --
> > Eric D. Mudama
> > edmudama at bounceswoosh.org
> >
> >
--
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.