[OpenIndiana-discuss] Is there anything I can do to tune caching

Grant Albitz GAlbitz at Albitz.Biz
Wed Jan 18 02:21:14 UTC 2012


Unless you feel NFS would, for one reason or another, cause my larger sequential writes to go to RAM, I don't think I will see an improvement in my scenario. From what I gather, writing large sequential transactions directly to disk is by design. I can understand that, since you wouldn't want a 10 GB file copy to nuke your entire ARC. That being said, in my case I do want that to happen; I was hoping I might be able to change something for my scenario, but that does not seem to be the case.
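
For reference, the sort of change I was hoping for would be something along these lines (the dataset name is just a placeholder for whatever backs the LUN, and I have not tried this):

    # Untested sketch: treat all writes to the dataset backing the iSCSI LUN
    # as asynchronous, so they land in RAM and flush with the transaction group.
    # This trades crash safety for speed; "tank/vmfs" is a made-up name.
    zfs get sync tank/vmfs
    zfs set sync=disabled tank/vmfs
    # revert to the default behavior later:
    zfs set sync=standard tank/vmfs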



-----Original Message-----
From: Andy Lubel [mailto:alubel at gmail.com] 
Sent: Tuesday, January 17, 2012 4:01 PM
To: Discussion list for OpenIndiana
Cc: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Is there anything I can do to tune caching

Have you considered NFS? I had much better performance with the Sun 7000 back when I was lucky enough to run VMware over 10 Gb NFS, especially with the read/write characteristics of VMware. Not to mention thin provisioning, easy expansion, etc.
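
The export side is only a couple of commands, roughly like this (the dataset name and storage network are just examples):

    # Rough sketch: share a dataset over NFS for use as an ESXi datastore.
    # "tank/vmware" and the 10.0.0.0/24 storage network are example values.
    zfs create tank/vmware
    zfs set sharenfs=rw=@10.0.0.0/24,root=@10.0.0.0/24 tank/vmware
    # then add it on each ESXi host as an NFS datastore pointing at /tank/vmware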


On Jan 17, 2012, at 15:26, Grant Albitz <GAlbitz at Albitz.Biz> wrote:

> iSCSI over 10 GbE
> 
> ________________________________________
> From: Andy Lubel [alubel at gmail.com]
> Sent: Tuesday, January 17, 2012 2:39 PM
> To: Discussion list for OpenIndiana
> Subject: Re: [OpenIndiana-discuss] Is there anything I can do to tune caching
> 
> NFS or iSCSI? The defaults usually work pretty well for NFS, especially if using log devices.
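> 
> Adding a log device is a one-liner, something like this (device names are examples):
> 
>     # Rough sketch: attach a mirrored SSD log (slog) to absorb sync writes.
>     # c2t14d0 and c2t15d0 are example device names.
>     zpool add tank log mirror c2t14d0 c2t15d0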
> 
> 
> 
> On Jan 17, 2012, at 11:08, Grant Albitz <GAlbitz at Albitz.Biz> wrote:
> 
>> I wanted to make a correction: sync writes do utilize the ZIL, but based on what I am reading, large sync writes do not.
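>> 
>> If I am reading it right, that cutoff is tied to the dataset's logbias property and (I think) the zfs_immediate_write_sz tunable; something like the following, though I have not verified the tunable on this build:
>> 
>>     # Untested sketch; dataset name is a placeholder.
>>     zfs set logbias=latency tank/vmfs    # default; logbias=throughput skips the slog
>>     # /etc/system tunable reportedly governing the "large sync write" threshold:
>>     # set zfs:zfs_immediate_write_sz=0x20000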
>> 
>> ________________________________________
>> From: Grant Albitz [GAlbitz at Albitz.Biz]
>> Sent: Tuesday, January 17, 2012 11:01 AM
>> To: Discussion list for OpenIndiana
>> Subject: Re: [OpenIndiana-discuss] Is there anything I can do to tune caching
>> 
>> My environment consists of 2 ESXi hosts, each connected to the SAN over a dedicated 10 Gb link. I ran the benchmark from inside one of my guest VMs.
>> 
>> The performance is definitely good enough for the environment. I understand that by default ZFS may not cache synchronous writes. I guess my question is: can that be changed? I would rather have those writes go to RAM than reserve RAM for read items that may or may not be requested... Is it possible to create a RAM disk in Solaris and then use that as the ZIL? Then again, I read the ZIL is generally not used for synchronous writes, so I might be back to the same point even if I implement something like that.
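>> 
>> The RAM-disk idea would presumably look something like this (the name and size are guesses, and obviously a volatile log device risks losing in-flight sync writes on a crash):
>> 
>>     # Untested sketch: carve out a RAM disk and attach it as a log device.
>>     # A power loss discards whatever sync writes were sitting in this "log".
>>     ramdiskadm -a zilram 4g
>>     zpool add tank log /dev/ramdisk/zilram
>>     # to back it out later:
>>     zpool remove tank /dev/ramdisk/zilram
>>     ramdiskadm -d zilram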
>> 
>> I understand the risk of writes going to RAM, but that can be addressed in other ways; I am just wondering if the synchronous caching can be enabled/tweaked.
>> 
>> Also, my L2ARC devices remain fairly empty and underutilized. Is there any way to push ZFS to cache more items, or do I just need to let this build over time? Is anyone aware of a time-to-live for items in the ZFS cache? Given my relatively light workload, I would prefer that items put in cache weren't removed unless it's running low on space. These may just turn out to be nice-to-haves that aren't possible; I just figured they might be =)
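>> 
>> (For what it's worth, the knobs I have seen mentioned for L2ARC fill rate are the secondarycache property and a couple of /etc/system tunables; the names below are the illumos ones as I understand them, so please correct me if they are wrong:)
>> 
>>     # Untested sketch of L2ARC-related settings.
>>     zfs set secondarycache=all tank         # cache both data and metadata in L2ARC
>>     # /etc/system tunables (take effect after a reboot):
>>     # set zfs:l2arc_noprefetch=0            # allow prefetched/streaming reads into L2ARC
>>     # set zfs:l2arc_write_max=0x10000000    # raise the per-interval fill rate (256 MB)
>>     # set zfs:l2arc_write_boost=0x10000000  # extra fill rate while the ARC is cold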
>> 
>> 
>> 
>> ________________________________________
>> From: Grant Albitz [GAlbitz at Albitz.Biz]
>> Sent: Tuesday, January 17, 2012 10:05 AM
>> To: Discussion list for OpenIndiana
>> Subject: Re: [OpenIndiana-discuss] Is there anything I can do to tune caching
>> 
>> I installed Solaris on p0 of one of the SSDs, performed a software RAID 1 to the other SSD for boot, and then used the remaining partition on each SSD as the cache device. This is why you see the cache drives as c2t12d0p2 and c2t13d0p2.
>> 
>> 
>> ________________________________________
>> From: Andy Lubel [alubel at gmail.com]
>> Sent: Tuesday, January 17, 2012 9:33 AM
>> To: Discussion list for OpenIndiana
>> Cc: Discussion list for OpenIndiana
>> Subject: Re: [OpenIndiana-discuss] Is there anything I can do to tune caching
>> 
>> I'm curious, where is the OS installed? Also, did you compile CrystalDiskMark for Solaris or run it over the network from a Windows box? Perhaps looking into FileBench would give you more numbers to look at and, depending on what you plan on doing with the NAS, help decide if it is tuned correctly.
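>> 
>> A quick FileBench run is roughly this (the workload and path are just examples; pick whatever matches your NAS use):
>> 
>>     # Rough sketch: run a canned FileBench workload against the pool.
>>     filebench
>>     filebench> load randomrw
>>     filebench> set $dir=/tank/fbtest
>>     filebench> run 60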
>> 
>> 
>> 
>> On Jan 16, 2012, at 21:33, Grant Albitz <GAlbitz at Albitz.Biz> wrote:
>> 
>>> Sorry, my second table's formatting was lost.
>>> 
>>> The SSDs each have ~24 GB used and ~190 GB free.
>>> 
>>> -----Original Message-----
>>> From: Grant Albitz [mailto:GAlbitz at Albitz.Biz]
>>> Sent: Monday, January 16, 2012 9:30 PM
>>> To: Discussion list for OpenIndiana (openindiana-discuss at openindiana.org)
>>> Subject: [OpenIndiana-discuss] Is there anything I can do to tune caching
>>> 
>>> I just completed my server and I am generally happy with it. I am running OpenIndiana on a Dell R510 with 64 GB of memory and 12 2 TB 7200 RPM SAS drives. I have 2 256 GB Samsung 830s as L2ARC.
>>>
>>> I have noticed a few things. First, the L2ARC is basically not being populated at all (only 20 GB or so). Also, when running a CrystalDiskMark benchmark against the LUN on the ZFS server, small writes seem to go to RAM: if I perform a test with 100 MB I get about 800 MB/s writes, but if I increase that to 500 MB I get only about 80 MB/s. It seems that the larger writes go directly to the disk. I have disabled the ZIL with the expectation that all writes would go to RAM, but this may not be the case. Below are some numbers that I pulled.
>>>
>>> Is there any way to increase L2ARC usage and also RAM usage? I understand that the L2ARC may not be populated if most of the activity can fit in RAM (and it probably can; my total used space is only 300 GB and right now I am testing this with single-user access). But if the reason the L2ARC isn't being populated is due to RAM availability, then why are some of my writes skipping RAM and going directly to disk?
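>>>
>>> (The raw L2ARC counters can also be pulled straight from kstat, assuming I have the arcstats statistic names right:)
>>>
>>>     # L2ARC fill and hit counters (sizes are in bytes).
>>>     kstat -p zfs:0:arcstats:l2_size
>>>     kstat -p zfs:0:arcstats:l2_hits
>>>     kstat -p zfs:0:arcstats:l2_misses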
>>> 
>>> ARC / synchronous write cache stats:
>>> System Memory:
>>>      Physical RAM: 65515 MB
>>>      Free Memory : 10187 MB
>>>      LotsFree:     1023 MB
>>> 
>>> ZFS Tunables (/etc/system):
>>> 
>>> ARC Size:
>>>      Current Size:             48921 MB (arcsize)
>>>      Target Size (Adaptive):   48921 MB (c)
>>>      Min Size (Hard Limit):    8061 MB (zfs_arc_min)
>>>      Max Size (Hard Limit):    64491 MB (zfs_arc_max)
>>> 
>>> ARC Size Breakdown:
>>>      Most Recently Used Cache Size:        93%    45557 MB (p)
>>>      Most Frequently Used Cache Size:       6%    3364 MB (c-p)
>>> 
>>> ARC Efficency:
>>>      Cache Access Total:            72661651
>>>      Cache Hit Ratio:      84%     61140347      [Defined State for buffer]
>>>      Cache Miss Ratio:     15%     11521304      [Undefined State for Buffer]
>>>      REAL Hit Ratio:       76%     55338630      [MRU/MFU Hits Only]
>>> 
>>>      Data Demand   Efficiency:    97%
>>>      Data Prefetch Efficiency:    35%
>>> 
>>>      CACHE HITS BY CACHE LIST:
>>>        Anon:                        8%      5398251               [ New Customer, First Cache Hit ]
>>>        Most Recently Used:         44%      27083442 (mru)        [ Return Customer ]
>>>        Most Frequently Used:       46%      28255188 (mfu)        [ Frequent Customer ]
>>>        Most Recently Used Ghost:    0%      141042 (mru_ghost)    [ Return Customer Evicted, Now Back ]
>>>        Most Frequently Used Ghost:  0%      262424 (mfu_ghost)    [ Frequent Customer Evicted, Now Back ]
>>>      CACHE HITS BY DATA TYPE:
>>>        Demand Data:                65%      40237050
>>>        Prefetch Data:               9%      5777420
>>>        Demand Metadata:            24%      15098015
>>>        Prefetch Metadata:           0%      27862
>>>      CACHE MISSES BY DATA TYPE:
>>>        Demand Data:                 7%      864227
>>>        Prefetch Data:              91%      10540593
>>>        Demand Metadata:             0%      112421
>>>        Prefetch Metadata:           0%      4063
>>> ---------------------------------------------
>>> 
>>> 
>>>                  capacity     operations    bandwidth
>>> pool            alloc   free   read  write   read  write
>>> -------------   -----  -----  -----  -----  -----  -----
>>> PSC.Net          442G  10.4T      0      0      0      0
>>>   mirror        73.6G  1.74T      0      0      0      0
>>>     c2t0d0          -      -      0      0      0      0
>>>     c2t1d0          -      -      0      0      0      0
>>>   mirror        73.8G  1.74T      0      0      0      0
>>>     c2t2d0          -      -      0      0      0      0
>>>     c2t3d0          -      -      0      0      0      0
>>>   mirror        73.8G  1.74T      0      0      0      0
>>>     c2t4d0          -      -      0      0      0      0
>>>     c2t5d0          -      -      0      0      0      0
>>>   mirror        73.8G  1.74T      0      0      0      0
>>>     c2t6d0          -      -      0      0      0      0
>>>     c2t7d0          -      -      0      0      0      0
>>>   mirror        73.7G  1.74T      0      0      0      0
>>>     c2t8d0          -      -      0      0      0      0
>>>     c2t9d0          -      -      0      0      0      0
>>>   mirror        73.8G  1.74T      0      0      0      0
>>>     c2t10d0         -      -      0      0      0      0
>>>     c2t11d0         -      -      0      0      0      0
>>> cache               -      -      -      -      -      -
>>>   c2t12d0p2     24.8G   194G      0      0      0      0
>>>   c2t13d0p2     24.4G   194G      0      0      0      0
>>> 
>>> 
>>> 

_______________________________________________
OpenIndiana-discuss mailing list
OpenIndiana-discuss at openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


