[OpenIndiana-discuss] Interesting question about L2ARC
Sašo Kiselkov
skiselkov.ml at gmail.com
Tue Sep 11 07:01:42 UTC 2012
On 09/10/2012 04:44 PM, Dan Swartzendruber wrote:
> I got a 256GB Crucial M4 to use for L2ARC for my OI box. I added it to
> the tank pool and let it warm for a day or so. By that point, 'zpool
> iostat -v' said the cache device had about 9GB of data, but (and this is
> what has me puzzled) kstat showed ZERO l2_hits. That's right, zero.
>
> kstat | egrep "(l2_hits|l2_misses)"
> l2_hits 0
> l2_misses 1143249
>
> The box has 20GB of RAM (it's actually a virtual machine on an ESXi
> host.) The datastore for the VMs is about 256GB. My first thought was
> everything is hitting in ARC, but that is clearly not the case, since it
> WAS gradually filling up the cache device. Maybe it's possible that
> every single miss is never ever being re-read, but that seems unlikely,
> no? If the l2_hits was a small number, I'd think it just wasn't giving
> me any bang for the buck, but zero sounds suspiciously like some kind of
> bug/mis-configuration. primarycache and secondarycache are both set to
> all. arc stats via arc_summary.pl:
>
> ARC Efficency:
>          Cache Access Total:        12324974
>          Cache Hit Ratio:      87%  10826363  [Defined State for buffer]
>          Cache Miss Ratio:     12%  1498611   [Undefined State for Buffer]
>          REAL Hit Ratio:        68%  8469470   [MRU/MFU Hits Only]
>
> Data Demand Efficiency: 85%
> Data Prefetch Efficiency: 59%
>
> For the moment, I gave up and moved the SSD back to being my windows7
> drive, where it does make a difference :) I'd be willing to shell out
> for another SSD, but only if I can gain some benefit from it. Any
> thoughts would be appreciated (if this is too esoteric for the OI list,
> I can try the zfs discussion list - I am starting here because of the
> common platform with the rest of the audience...)
I recommend you go to zfs at lists.illumos.org, since this is a very
ZFS-specific problem.
At first glance it's hard to tell why your l2arc is failing to fill up,
but my suspicion is that it has something to do with your workload. As a
recap, here's how the l2arc works:
 * there is a feed thread (l2arc_feed_thread) that periodically scans
   the end of the MRU/MFU lists of the ARC in order to capture buffers
   before they are evicted and writes them to l2arc (you can watch it
   at work; see the kstat snippet right after this list)
 * l2arc by default only caches non-prefetch data (i.e. random reads),
   since it is primarily a tool to lower random-access latency, not to
   increase linear throughput (the main pool is assumed to be faster
   than l2arc in bulk read volume)
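A quick way to see whether the feed thread is actually running and
pushing data is to watch the l2arc write counters in the same arcstats
kstat (counter names as in the illumos arcstats; l2_feeds counts
feed-thread passes, l2_write_bytes the bytes written to the cache
device - just a quick sketch):

# kstat -p zfs:0:arcstats | egrep 'l2_(feeds|write_bytes|writes_sent)'

If l2_feeds climbs while l2_write_bytes stays flat, the feed thread is
running but never finding eligible buffers at the tail of the ARC lists.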
It is somewhat suspicious that your l2arc only contains 9GB of data. Run
the following command to check your l2arc growth in response to your
workloads:
# while sleep 2; do echo -------; kstat -m zfs -n arcstats | grep l2; done
Look for the l2_size parameter. If your workload's random-access portion
fits entirely into the ARC, then l2arc isn't going to do you any good.
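Once the cache has had time to warm, you can also put a number on it by
deriving the hit ratio from the same counters (a small sketch; the kstat
names are standard arcstats, the awk is just illustrative):

# kstat -p zfs:0:arcstats | awk '
    /l2_hits/   { h = $2 }   # cumulative l2arc read hits
    /l2_misses/ { m = $2 }   # cumulative l2arc read misses
    END { if (h + m > 0) printf "L2ARC hit ratio: %.1f%%\n", 100*h/(h+m) }'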
If you do want to cache prefetched data in the l2arc as well (because
your l2arc devices have cumulatively higher throughput than your main
pool), try setting l2arc_noprefetch=0:
# echo l2arc_noprefetch/W0t0 | mdb -kw
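You can read the value back the same way to confirm the write took:

# echo l2arc_noprefetch/D | mdb -k

and, if you want the setting to survive a reboot, the usual illumos
convention for zfs tunables in /etc/system should apply (do verify on
your build):

set zfs:l2arc_noprefetch = 0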
Be advised, though, that this might start an unending heavy write
frenzy to your l2arc devices, which could burn through their flash
write cycles much faster than you'd want.
Cheers,
--
Saso