[OpenIndiana-discuss] ZFS with Deduplication for NFS server
    Edward Ned Harvey 
    openindiana at nedharvey.com
       
    Sat Apr 23 13:10:56 UTC 2011
    
    
  
> From: Toomas Soome [mailto:Toomas.Soome at mls.ee]
> 
> Well, do a bit of math. If I'm correct, with a 320B DDT entry, 1.75GB
> of RAM can fit 5.8M entries, while 1TB of data at the default 128K
> recordsize would produce 8M entries... that's with the default
> metadata limit. Unless I did my calculations wrong, that would explain
> the slowdown.
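
For what it's worth, that arithmetic checks out. A minimal sketch of it
in Python, assuming the ~320-byte in-core DDT entry size and the 1.75GB
metadata budget from the quoted message:

    # Sketch of the DDT sizing math quoted above; the 320B/entry and
    # 1.75GB figures come from the message, not from measurement.
    DDT_ENTRY_BYTES = 320            # in-core size of one DDT entry
    RECORDSIZE = 128 * 1024          # default ZFS recordsize (128K)
    METADATA_BYTES = 1.75 * 2**30    # RAM available for metadata

    entries_that_fit = METADATA_BYTES / DDT_ENTRY_BYTES
    entries_needed = 2**40 / RECORDSIZE      # 1TB of unique data

    print("fit in RAM:  %.2fM entries" % (entries_that_fit / 1e6))  # ~5.87M
    print("needed, 1TB: %.2fM entries" % (entries_needed / 1e6))    # ~8.39M

So roughly 2.5M entries spill out of RAM, and every dedup lookup that
misses the cache goes to disk, which is consistent with the slowdown
being described.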
Not sure where you're getting those numbers, but the rule of thumb is to
add 1-3GB of RAM for every 1TB of unique deduplicated data.
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup 
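
As a rough illustration of that rule of thumb (the 1-3GB/TB range is
from the link above; the helper below is hypothetical, not part of any
ZFS tool):

    # Hypothetical sizing helper for the "1-3GB of RAM per 1TB of
    # unique dedup data" rule of thumb; illustration only.
    def dedup_ram_budget_gb(unique_data_tb):
        """Return the (low, high) RAM budget in GB for the DDT."""
        return (1.0 * unique_data_tb, 3.0 * unique_data_tb)

    low, high = dedup_ram_budget_gb(1)   # the 1TB pool discussed above
    print("budget %.0f-%.0f GB of RAM" % (low, high))

By that measure the 1.75GB figure above is inside the range, but only
barely, which again points at a DDT that doesn't fit comfortably in RAM.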
    
    