[OpenIndiana-discuss] ZFS with Dedupication for NFS server
Toomas.Soome at mls.ee
Fri Apr 22 18:41:38 UTC 2011
well, do a bit of math. if I'm correct, with 320B per DDT entry, 1.75GB of RAM can fit about 5.8M entries; 1TB of data, assuming 128k recordsize, would produce about 8M entries.... that's with the default metadata limit. unless I did my calculations wrong, that would explain the slowdown.
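to make that concrete, here is a rough sketch of the arithmetic in shell (the 320B entry size and the 128k recordsize are just the figures from this thread, not values read off a live system):

    # one DDT entry per unique 128k block: 1TB / 128k = ~8.4M entries
    echo $(( (1024 ** 4) / (128 * 1024) ))
    # RAM those entries need at ~320B apiece: ~2.5GB
    echo "scale=2; 8388608 * 320 / 1024^3" | bc
    # default metadata ceiling with 8GB RAM: (8GB - 1GB) / 4 = 1.75GB,
    # which has room for only ~5.8M entries at 320B each
    echo $(( ((7 * 1024 ** 3) / 4) / 320 ))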
On 22.04.2011, at 21:19, Roy Sigurd Karlsbakk wrote:
> That's theory; in practice, even with sufficient RAM/L2ARC and some amount of SLOG, dedup slows writes down to a minimum. My test was done with 8TB net storage, 8GB RAM, and two 80GB Intel X25-M SSDs divided into 2x4GB SLOG (mirrored) and the rest as L2ARC. The application tested was Bacula, with the OI box as a storage daemon (bacula-sd). Performance was OK until about 1TB was used; dedup ratios were low, since this was during the initial backup, but write speed was down to tens of MB/s.
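if you want to check how big the DDT actually got on a setup like that, zdb can report it, and can also simulate dedup before you enable it ("tank" below is a placeholder pool name):

    # DDT statistics for a pool with dedup enabled: entry counts plus
    # the table's size on disk and in core
    zdb -DD tank

    # simulate dedup on a pool that doesn't have it enabled yet, to
    # estimate the table size and the ratio you would actually get
    zdb -S tank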
> ----- Original Message -----
>> the basic math behind the scenes is the following (and not entirely
>> exact):
>> 1. DDT data is kept in the metadata part of the ARC;
>> 2. the metadata default max is arc_c_max / 4
>> (note that you can raise that limit; see the kstat sketch below);
>> 3. arc max is RAM - 1GB.
>> so, if you have 8GB of RAM, your arc max is 7GB and max metadata is
>> 1.75GB. with an 8GB server, that means it will store at MOST 1.75GB
>> of DDT in the ARC. a DDT entry is said to take about 250B.
>> now the tricky part: those numbers are max values, but you also need
>> some space to store "normal" metadata, not just the DDT. and since you
>> can't really distinguish DDT from other metadata, some guessing is
>> unfortunately involved.
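to see how close you are to that ceiling, and to raise it, something like the following should work (arc_meta_used/arc_meta_limit are standard arcstats; zfs_arc_meta_limit is the tunable name documented for Solaris-derived ZFS, so double-check it on your build):

    # current metadata usage vs. the default limit, in bytes
    kstat -p zfs:0:arcstats:arc_meta_used zfs:0:arcstats:arc_meta_limit

    # raise the metadata ceiling to e.g. 4GB: add to /etc/system, then reboot
    echo 'set zfs:zfs_arc_meta_limit = 0x100000000' >> /etc/system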
>> as for performance: even if you have enough RAM and L2ARC, the ARC
>> warmup time is more critical, as currently the L2ARC contents are
>> lost on reboot, and obviously the ARC contents as well. that's the
>> downside of having dedup integrated into the filesystem.
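a crude way to watch that warmup after a reboot is to sample the same arcstats (this assumes the standard kstat interface):

    # print ARC and L2ARC fill every 10 seconds
    while sleep 10; do
        kstat -p zfs:0:arcstats:size zfs:0:arcstats:l2_size
    done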
>> On 22.04.2011, at 0:48, Eric D. Mudama wrote:
>>> On Thu, Apr 21 at 14:12, James Kohout wrote:
>>>> Been running OpenSolaris 134 with a 9T RAIDZ2 array as a backup server
>>>> in a production environment. Whenever I tried to turn on ZFS
>>>> deduplication I always had crashes and other issues, which I mostly
>>>> attributed to the known ZFS dedup bugs in 134. Once I rebuilt the pool
>>>> without dedup, things have been running great for several months without
>>>> a glitch. As a result, I am highly confident it was not a hardware issue.
>>>> So I am looking to upgrade to oi148 to be able to enable deduplication.
>>>> Does anyone have experience running a ZFS RAIDZ2 pool with
>>>> deduplication in a production environment? Is ZFS deduplication in oi148
>>>> considered stable/production ready? I would hate to break a working
>>>> setup chasing a feature that is not ready.
>>>> Any feedback or experience would be appreciated.
>>> The summary of list postings over the last 6 months is that dedup
>>> requires way more RAM and/or L2ARC than most people budgeted for
>>> in order to work as smoothly as a non-dedup installation, and that
>>> when under-budgeted on RAM/L2ARC, the performance of scrubs and
>>> snapshot deletion is atrocious.
>>> I don't have the math handy on the memory requirements; maybe someone
>>> can post that part of the summary.
>>> Eric D. Mudama
>>> edmudama at bounceswoosh.org
> Vennlige hilsener / Best regards
> Roy Sigurd Karlsbakk
> (+47) 97542685
> roy at karlsbakk.net
> In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.