[OpenIndiana-discuss] ZFS with Dedupication for NFS server

Roy Sigurd Karlsbakk roy at karlsbakk.net
Mon Apr 25 13:11:38 UTC 2011


> (2) L2arc is not simply a slower extension of L1arc as you seem to be
> thinking. Every entry in the L2arc requires an entry in the L1arc. I
> don't know what the multiplier ratio is, but I hear something between
> 10x and 20x. So if you have for example 20G of L2arc, that would
> consume something like 1-2G of ram.
> 
> Oddly enough, just google for this: L2ARC memory requirements
> And what you see is --- A conversation in which you, Roy, and I both
> participated. And it was perfectly clear that you require RAM
> consumption in order to support your L2ARC. So, I really don't know
> how any confusion came about here... You should be solidly aware by
> now, that enabling L2ARC comes with a RAM cost.

Sorry - I had forgotten that thread, but you're right. Still, it seems ZFS doesn't use all of its memory for the DDT; by default only (RAM - 1GB)/4 is available for metadata, according to Toomas Soome (tsome @ #openindiana), meaning I hit the barrier at about 1.2TB of unique deduplicated data on disk.
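For illustration, here is a rough sketch of that barrier calculation. The ~320 bytes per in-core DDT entry is an assumption (a commonly cited ballpark figure, not an official number), and the metadata cap is the default (RAM - 1GB)/4 mentioned above:

```python
# Back-of-the-envelope estimate: how much unique deduplicated data fits
# before the DDT outgrows the default ARC metadata limit.
GB = 1024 ** 3
TB = 1024 ** 4

def ddt_capacity_bytes(ram_bytes, avg_block_size, ddt_entry_size=320):
    """Approximate unique on-disk data whose DDT still fits in the
    default ARC metadata limit of (RAM - 1GB)/4."""
    metadata_limit = (ram_bytes - 1 * GB) / 4       # default metadata cap
    max_entries = metadata_limit / ddt_entry_size   # one DDT entry per unique block
    return max_entries * avg_block_size

# Example: 16 GB of RAM and 128 KB average blocks
cap = ddt_capacity_bytes(16 * GB, 128 * 1024)
print(f"{cap / TB:.1f} TB")  # → 1.5 TB
```

With smaller average block sizes the ceiling drops sharply, since the DDT holds one entry per unique block regardless of block size.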

Does anyone know where I can find exact numbers for the RAM cost? I remember reading something about this the last time I did some testing, but I don't recall the RAM cost being as high as 10-20% of the L2ARC size. With these numbers in place, we could create a spreadsheet or even a webapp to allow easy calculation, given a (guessed or known) average block size, etc...
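A minimal sketch of what such a calculator might look like. The ~180-byte ARC header per L2ARC-cached buffer is an assumption (figures in the 180-200 byte range are commonly quoted and vary between ZFS versions), not an exact number:

```python
# Hypothetical L2ARC RAM-overhead calculator: each buffer cached on the
# L2ARC device needs an ARC header in RAM, so the cost scales with the
# number of buffers, i.e. L2ARC size divided by average block size.
GB = 1024 ** 3

def l2arc_ram_cost(l2arc_bytes, avg_block_size, header_size=180):
    """Approximate RAM consumed by ARC headers for a full L2ARC device."""
    n_buffers = l2arc_bytes / avg_block_size
    return n_buffers * header_size

# 20 GB of L2ARC filled with 4 KB blocks:
cost = l2arc_ram_cost(20 * GB, 4096)
print(f"{cost / GB:.2f} GB")  # → 0.88 GB
```

Note how strongly the result depends on block size: the same 20 GB of L2ARC filled with 128 KB blocks would need only a few tens of megabytes of headers, which is why an "average block size" input is essential for any such spreadsheet.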

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
[Translated from Norwegian:] In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.


