[OpenIndiana-discuss] Is there anything I can do to tune caching
Grant Albitz
GAlbitz at Albitz.Biz
Tue Jan 17 15:05:52 UTC 2012
I installed Solaris on p0 of one of the SSDs, set up a software RAID 1 mirror to the other SSD for boot, and then used the remaining partition on each SSD as a cache device. That is why you see the cache drives as c2t12d0p2 and c2t13d0p2.
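For reference, the cache partitions were attached with something along these lines (pool name taken from the iostat output below; exact partition names will of course vary per system):

    zpool add PSC.Net cache c2t12d0p2 c2t13d0p2

and zpool status PSC.Net lists them under the cache vdev.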
________________________________________
From: Andy Lubel [alubel at gmail.com]
Sent: Tuesday, January 17, 2012 9:33 AM
To: Discussion list for OpenIndiana
Cc: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Is there anything I can do to tune caching
I'm curious, where is the OS installed? Also, did you compile CrystalDiskMark for Solaris, or run it over the network from a Windows box? Perhaps looking into Filebench would give you more numbers to work with and, depending on what you plan to do with the NAS, help you decide whether it is tuned correctly.
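A minimal Filebench session looks something like this (the $dir path is just a placeholder for a dataset on your pool):

    # filebench
    filebench> load randomrw
    filebench> set $dir=/PSC.Net/test
    filebench> run 60

That runs the bundled random read/write workload for 60 seconds and prints per-operation throughput and latency at the end.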
On Jan 16, 2012, at 21:33, Grant Albitz <GAlbitz at Albitz.Biz> wrote:
> Sorry, my second table's formatting was lost.
>
> The SSDs each have ~24 GB used and ~190 GB free.
>
> -----Original Message-----
> From: Grant Albitz [mailto:GAlbitz at Albitz.Biz]
> Sent: Monday, January 16, 2012 9:30 PM
> To: Discussion list for OpenIndiana (openindiana-discuss at openindiana.org)
> Subject: [OpenIndiana-discuss] Is there anything I can do to tune caching
>
> I just completed my server and I am generally happy with it. I am running OpenIndiana on a Dell R510 with 64 GB of memory and twelve 2 TB 7200 rpm SAS drives, plus two 256 GB Samsung 830s as L2ARC. I have noticed a few things. First, the L2ARC is basically not being populated at all (only 20 GB or so). Second, when running a CrystalDiskMark benchmark against the LUN on the ZFS server, small writes seem to go to RAM: a 100 MB test gives about 800 MB/s writes, but increasing it to 500 MB drops to about 80 MB/s, so the larger writes appear to go directly to disk. I have disabled the ZIL with the expectation that all writes would go to RAM, but that may not be the case.
>
> Below are some numbers that I pulled. Is there any way to increase both L2ARC and RAM usage? I understand that the L2ARC may not be populated if most of the activity fits in RAM (and it probably does; my total used space is only 300 GB and right now I am testing with single-user access). But if the reason the L2ARC isn't being populated is RAM availability, why are some of my writes skipping RAM and going directly to disk?
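>
> A few /etc/system tunables that look relevant to the L2ARC feed rate (the values here are examples, and the defaults are as I understand them on illumos, so please correct me if I am wrong):
>
>   * raise the per-interval L2ARC feed rate from the 8 MB default to 64 MB
>   set zfs:l2arc_write_max = 67108864
>   * extra feed rate allowed while the ARC is still warming up
>   set zfs:l2arc_write_boost = 134217728
>   * default is 1, meaning prefetched buffers are never written to the L2ARC
>   set zfs:l2arc_noprefetch = 0
>
> Since 91% of my cache misses below are prefetch data, l2arc_noprefetch in particular may explain the nearly empty cache devices.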
>
> ARC synchronous write cache
> System Memory:
> Physical RAM: 65515 MB
> Free Memory : 10187 MB
> LotsFree: 1023 MB
>
> ZFS Tunables (/etc/system):
>
> ARC Size:
> Current Size: 48921 MB (arcsize)
> Target Size (Adaptive): 48921 MB (c)
> Min Size (Hard Limit): 8061 MB (zfs_arc_min)
> Max Size (Hard Limit): 64491 MB (zfs_arc_max)
>
> ARC Size Breakdown:
> Most Recently Used Cache Size: 93% 45557 MB (p)
> Most Frequently Used Cache Size: 6% 3364 MB (c-p)
>
> ARC Efficiency:
> Cache Access Total: 72661651
> Cache Hit Ratio: 84% 61140347 [Defined State for buffer]
> Cache Miss Ratio: 15% 11521304 [Undefined State for Buffer]
> REAL Hit Ratio: 76% 55338630 [MRU/MFU Hits Only]
>
> Data Demand Efficiency: 97%
> Data Prefetch Efficiency: 35%
>
> CACHE HITS BY CACHE LIST:
> Anon: 8% 5398251 [ New Customer, First Cache Hit ]
> Most Recently Used: 44% 27083442 (mru) [ Return Customer ]
> Most Frequently Used: 46% 28255188 (mfu) [ Frequent Customer ]
> Most Recently Used Ghost: 0% 141042 (mru_ghost) [ Return Customer Evicted, Now Back ]
> Most Frequently Used Ghost: 0% 262424 (mfu_ghost) [ Frequent Customer Evicted, Now Back ]
> CACHE HITS BY DATA TYPE:
> Demand Data: 65% 40237050
> Prefetch Data: 9% 5777420
> Demand Metadata: 24% 15098015
> Prefetch Metadata: 0% 27862
> CACHE MISSES BY DATA TYPE:
> Demand Data: 7% 864227
> Prefetch Data: 91% 10540593
> Demand Metadata: 0% 112421
> Prefetch Metadata: 0% 4063
> ---------------------------------------------
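>
> (The L2ARC fill can also be watched live with kstat; a sketch, assuming the standard arcstats counter names:
>
>   kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses 5
>
> prints the three counters every 5 seconds.)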
>
>
>                 capacity     operations    bandwidth
> pool           alloc   free   read  write   read  write
> -------------  -----  -----  -----  -----  -----  -----
> PSC.Net         442G  10.4T      0      0      0      0
>   mirror       73.6G  1.74T      0      0      0      0
>     c2t0d0         -      -      0      0      0      0
>     c2t1d0         -      -      0      0      0      0
>   mirror       73.8G  1.74T      0      0      0      0
>     c2t2d0         -      -      0      0      0      0
>     c2t3d0         -      -      0      0      0      0
>   mirror       73.8G  1.74T      0      0      0      0
>     c2t4d0         -      -      0      0      0      0
>     c2t5d0         -      -      0      0      0      0
>   mirror       73.8G  1.74T      0      0      0      0
>     c2t6d0         -      -      0      0      0      0
>     c2t7d0         -      -      0      0      0      0
>   mirror       73.7G  1.74T      0      0      0      0
>     c2t8d0         -      -      0      0      0      0
>     c2t9d0         -      -      0      0      0      0
>   mirror       73.8G  1.74T      0      0      0      0
>     c2t10d0        -      -      0      0      0      0
>     c2t11d0        -      -      0      0      0      0
> cache              -      -      -      -      -      -
>   c2t12d0p2    24.8G   194G      0      0      0      0
>   c2t13d0p2    24.4G   194G      0      0      0      0
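>
> (The table above is an idle zpool iostat -v PSC.Net snapshot; re-running it with an interval, e.g. zpool iostat -v PSC.Net 5, during a benchmark run would show whether the cache devices see any traffic.)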
>
>
>
_______________________________________________
OpenIndiana-discuss mailing list
OpenIndiana-discuss at openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss